
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: PyTorch error

I am trying to run some code in PyTorch but I got stuck at this point:

At the first iteration, both backward operations, for the Discriminator and the Generator, run fine:

    ....
    self.G_loss.backward(retain_graph=True)
    self.D_loss.backward()
    ...

At the second iteration, when self.G_loss.backward(retain_graph=True) executes, I get this error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [8192, 512]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

According to torch.autograd.set_detect_anomaly, the last of the following lines in the Discriminator network is responsible for this:

    bottleneck = bottleneck[:-1]
    self.embedding = x.view(x.size(0), -1)
    self.logit = self.layers[-1](self.embedding)
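
For context, the "version 2; expected version 1" part refers to autograd's version counters: every in-place mutation of a tensor bumps its counter, and backward refuses to use a saved tensor whose counter has changed since the forward pass. A minimal sketch that reproduces the same class of error (all names here are illustrative, not from the code above):

    import torch

    torch.autograd.set_detect_anomaly(True)  # makes the traceback point at the offending forward op

    x = torch.ones(3, requires_grad=True)
    y = x * 2
    z = (y ** 2).sum()  # pow saves y for its backward pass
    y += 1              # in-place add bumps y's version counter
    z.backward()        # RuntimeError: ... is at version 1; expected version 0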

The strange thing is that I have used that network architecture in other code where it worked properly. Any suggestions?
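
One common cause of this exact failure at the second iteration in GAN training (not necessarily the cause here) is reusing a graph across an optimizer step: optimizer.step() updates the weights in place, so a graph retained from before the step no longer matches the parameters' version counters when the next backward runs. Below is a minimal sketch of a loop ordering that avoids retain_graph altogether; all model, optimizer, and loss names are hypothetical stand-ins:

    import torch
    import torch.nn as nn

    # Tiny stand-in models, just to make the ordering runnable.
    G = nn.Linear(4, 8)
    D = nn.Linear(8, 1)
    g_opt = torch.optim.SGD(G.parameters(), lr=0.1)
    d_opt = torch.optim.SGD(D.parameters(), lr=0.1)
    bce = nn.BCEWithLogitsLoss()

    for _ in range(3):
        real = torch.randn(16, 8)
        z = torch.randn(16, 4)
        fake = G(z)

        # Discriminator step: detach fake so no gradient flows into G here.
        d_opt.zero_grad()
        d_loss = (bce(D(real), torch.ones(16, 1))
                  + bce(D(fake.detach()), torch.zeros(16, 1)))
        d_loss.backward()
        d_opt.step()  # in-place update bumps D's parameter versions

        # Generator step: a fresh forward through the *updated* D, so the
        # graph matches the current parameter versions; no retain_graph.
        g_opt.zero_grad()
        g_loss = bce(D(fake), torch.ones(16, 1))
        g_loss.backward()
        g_opt.step()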

The full error:

    site-packages\torch\autograd\__init__.py", line 127, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [8192, 512]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

Solved by removing the line of code containing loss += loss_val.
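
For anyone hitting the same thing: loss += loss_val mutates the accumulator tensor in place, bumping its version counter, which can break a later backward pass through a retained graph. A sketch of two accumulation patterns that avoid the in-place add (names are illustrative):

    import torch

    params = torch.randn(3, requires_grad=True)

    # 1. Out-of-place accumulation, if the total must stay differentiable:
    loss = torch.zeros(())
    for _ in range(3):
        loss_val = (params ** 2).sum()
        loss = loss + loss_val      # builds a new tensor; nothing is mutated
    loss.backward()

    # 2. Accumulate a plain float when the total is only for logging:
    running = 0.0
    loss_val = (params ** 2).sum()
    running += loss_val.item()      # detached scalar; no graph, no version counters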
