
How to debug when weights keep increasing in a PyTorch program

I have a question about a PyTorch program I am practicing with.

I have a function of the form y = m1*x1 + m2*x2 + c (just two weights to learn here). The expected weights are 16 and -14, and the bias should be 36. But in every epoch the learned weights grow very large. Can anyone help me debug these 20 lines of code and understand what is going wrong?

import torch

# one random sample with two integer features in [0, 10)
x = torch.randint(size = (1,2), high = 10)
w = torch.Tensor([16,-14])
b = 36
#Compute Ground Truth (note: w * x multiplies element-wise)
y = w * x + b

#Find weights by program
epoch = 20
learning_rate = 30

#initialize random
w1 = torch.rand(size= (1,2), requires_grad= True)
b1 = torch.ones(size = [1], requires_grad= True)

for i in range(epoch):
    y1 = w1 * x + b1

    #loss: sum of squared errors (SSE), not RMSE
    loss = torch.sum((y1-y)**2)

    #Find gradient 
    loss.backward()

    with torch.no_grad():
        #update parameters
        w1 -= (learning_rate * w1.grad)
        b1 -= (learning_rate * b1.grad)

        w1.grad.zero_()
        b1.grad.zero_()

    print("B ", b1)  
    print("W ", w1)

Thanks, Ganesh

You have a very large learning rate.

This is an illustration from Jeremy Jordan's blog that explains exactly what is going on in your case.

[Image: effect of learning-rate size on gradient descent convergence, from Jeremy Jordan's blog]
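
Concretely, with a squared-error loss each gradient step scales the error in a weight by roughly (1 - 2 * learning_rate * x**2); with x values up to 9 and learning_rate = 30 that factor is huge in magnitude, so the parameters overshoot further on every epoch instead of converging. Below is a minimal sketch of the fix; the learning rate of 0.01, the 5000 iterations, and the 100-sample batch are assumed values, not from your post. It also uses a dot product (x @ w) to match y = m1*x1 + m2*x2 + c, and many samples, because a single (x1, x2) pair cannot pin down three parameters:

import torch

torch.manual_seed(0)

# many samples instead of one, so w and b are uniquely determined
x = torch.randint(size = (100, 2), high = 10).float()
w = torch.tensor([16.0, -14.0])
b = 36.0
y = x @ w + b                        # ground truth: y = 16*x1 - 14*x2 + 36

w1 = torch.rand(2, requires_grad = True)
b1 = torch.ones(1, requires_grad = True)

learning_rate = 0.01                 # assumed value; 30 diverges
for i in range(5000):
    y1 = x @ w1 + b1
    loss = torch.mean((y1 - y) ** 2)

    loss.backward()

    with torch.no_grad():
        w1 -= learning_rate * w1.grad
        b1 -= learning_rate * b1.grad
        w1.grad.zero_()
        b1.grad.zero_()

print("W ", w1)   # approaches [16., -14.]
print("B ", b1)   # approaches [36.]

If you instead keep your original element-wise model and single sample, dropping the learning rate to around 0.001 already stops the divergence, but the recovered weights need not equal 16 and -14, since one sample leaves the system underdetermined. In practice you would reach for torch.optim.SGD, which wraps exactly this update-then-zero_grad pattern.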
