
Is there any essential difference between * and + for PyTorch autograd?

I was trying to understand the autograd mechanism in more depth. To test my understanding, I wrote the following code, which I expected to produce an error (i.e., "Trying to backward through the graph a second time"):

import torch

a = torch.tensor([1.0], requires_grad=True)  # assumed: 'a' is some tensor with requires_grad=True (its definition was not shown)
b = torch.Tensor([0.5])
for i in range(5):
    b.data.zero_().add_(0.5)
    b = b + a
    c = b * a
    c.backward()

I expected it to report an error when c.backward() is called for the second time in the for loop, because the history of b has been freed by the first backward pass; however, nothing happens.

But when I tried to change b + a to b * a as follows,

a = torch.tensor([1.0], requires_grad=True)  # same assumed 'a' as above
b = torch.Tensor([0.5])
for i in range(5):
    b.data.zero_().add_(0.5)
    b = b * a
    c = b * a
    c.backward()

it did report the error I was expecting. This looks pretty weird to me: I don't understand why no error is raised in the former case, and why changing from + to * makes a difference.

The difference is that addition does not need its inputs to compute the gradient (the local gradient of + is just 1), while multiplication does (the gradient of b * a with respect to a is b, so the other operand must be remembered). Autograd is aware of this: the add node saves no tensors, so the first backward() has nothing to free and nothing is missing when that node is traversed again, whereas the mul node's saved operands are freed after the first backward(), and the second pass through it raises the error.
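A minimal sketch of that explanation, outside the loop (standalone tensors, assuming a reasonably recent PyTorch; the exact error message may vary by version): the backward of + does not depend on its inputs, so nothing is saved on the graph and nothing gets freed, whereas the backward of * needs the other operand, which is saved and then released by the first backward().

import torch

a = torch.tensor([2.0], requires_grad=True)
b = torch.tensor([3.0])  # does not require grad

# Addition: d(a + b)/da = 1 regardless of a and b, so AddBackward0
# saves no tensors and backward() has nothing to free.
s = a + b
s.backward()
s.backward()  # no error, even without retain_graph=True

# Multiplication: d(a * b)/da = b, so MulBackward0 saves the other
# operand; it is freed by the first backward(), and a second backward
# through the same node fails.
p = a * b
p.backward()
try:
    p.backward()
except RuntimeError as err:
    print(err)  # Trying to backward through the graph a second time ...

This is also why the + version of the loop keeps working across iterations: each backward() does revisit the AddBackward0 nodes from earlier iterations, but since they never saved anything, nothing is missing; in the * version the old MulBackward0 node's saved operands are already gone, hence the error.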
