
Is there any option in PyTorch's autograd function for this problem?

Sorry for the vague title; I don't know exactly how to ask this question.

I'm using PyTorch's autograd grad function right now, and I'm struggling with results I don't understand.

Intuitively, the gradient computed from the loss tells each parameter how far to move in the direction that minimizes the loss. Since differentiation is linear, scaling the loss should simply scale the gradient by the same factor; it doesn't make sense for anything else to change just because the scale changed.

It means $$ \mathrm{grad}(loss) = 5 \cdot \mathrm{grad}\!\left(\frac{loss}{5}\right) $$
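This is just the linearity of differentiation: for any constant $c$,

$$ \nabla_\theta \big( c \cdot loss(\theta) \big) = c \cdot \nabla_\theta \, loss(\theta) $$

so with $c = \frac{1}{5}$ the two gradients should differ by exactly a factor of 5.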

But my actual result doesn't match this:

[image: formulation explanation]

And this is my actual code:

from torch.autograd import grad

# Gradient of the training loss w.r.t. the model parameters.
train_loss = loss(models(adaptation_data), adaptation_labels)
grads = grad(train_loss, models.parameters(), create_graph=True)

# Gradient of the same loss scaled by 0.2 (= 1/5).
grads_02 = grad(train_loss * 0.2, models.parameters(), create_graph=True)

# Elementwise comparison of the last parameter's gradients.
grads[-1] == grads_02[-1] * 5
# result: False
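For anyone who wants to reproduce this without my training setup, here is a minimal self-contained sketch; the linear model, random data, and cross-entropy loss are hypothetical stand-ins for my models, adaptation_data, adaptation_labels, and loss:

import torch
from torch.autograd import grad

# Hypothetical stand-ins for the original model, data, and loss.
model = torch.nn.Linear(10, 2)
data = torch.randn(32, 10)
labels = torch.randint(0, 2, (32,))
loss_fn = torch.nn.CrossEntropyLoss()

train_loss = loss_fn(model(data), labels)
grads = grad(train_loss, model.parameters(), create_graph=True)
grads_02 = grad(train_loss * 0.2, model.parameters(), create_graph=True)

# Exact equality usually fails: 0.2 is not exactly representable in binary.
print((grads[-1] == grads_02[-1] * 5).all())        # typically tensor(False)
print(torch.allclose(grads[-1], grads_02[-1] * 5))  # True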

[screenshot: the whole code]

Maybe I'm doing something wrong, or maybe there is an option in the grad function for this. Can anyone tell me?

Your code screenshot shows that the two tensors differ only because of floating-point rounding error. Do not compare them with the == operator; use the isclose() function instead:

torch.isclose(grads[-1], grads_02[-1] * 5)
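Note that isclose() returns an elementwise boolean tensor; if you want a single True/False for the whole comparison, torch.allclose() does the reduction for you:

torch.allclose(grads[-1], grads_02[-1] * 5)  # returns a single Python bool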
