
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: PyTorch error

I am trying to run some code in PyTorch, but I got stuck at this point:

On the first iteration, both backward passes, for the Discriminator and the Generator, run fine:

    ....
    self.G_loss.backward(retain_graph=True)
    self.D_loss.backward()
    ...
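(For reference, a common GAN update ordering that avoids retain_graph=True entirely is to detach the generated batch for the discriminator pass and to run the discriminator again for the generator pass. The sketch below uses stand-in modules, optimizers, and data, none of which come from this code.)

    import torch
    import torch.nn as nn

    # Minimal sketch, not the asker's code: G, D, the optimizers and the data
    # are stand-ins. Detaching the fake batch for the discriminator step keeps
    # the two backward passes on independent graphs, so no retain_graph=True
    # is needed.
    G = nn.Linear(16, 32)                    # stand-in generator
    D = nn.Linear(32, 1)                     # stand-in discriminator
    opt_G = torch.optim.Adam(G.parameters())
    opt_D = torch.optim.Adam(D.parameters())
    bce = nn.BCEWithLogitsLoss()

    real, noise = torch.randn(8, 32), torch.randn(8, 16)
    ones, zeros = torch.ones(8, 1), torch.zeros(8, 1)

    fake = G(noise)

    # Discriminator step: fake.detach() keeps G's graph out of this backward.
    d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator step: run D again so the saved activations match D's weights
    # *after* the in-place update from opt_D.step().
    g_loss = bce(D(fake), ones)
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()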

On the second iteration, when self.G_loss.backward(retain_graph=True) executes, I get this error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [8192, 512]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
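To make the version numbers in that message concrete, here is a minimal reproduction of the same class of error (an illustration only, not this question's code): sigmoid saves its output for the backward pass, and modifying that output in place bumps its version counter, so backward() refuses to use it.

    import torch

    x = torch.randn(3, requires_grad=True)
    y = torch.sigmoid(x)   # y (version 0) is saved for the backward pass
    y += 1                 # in-place op bumps y to version 1
    y.sum().backward()     # RuntimeError: ... is at version 1; expected version 0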

According to torch.autograd.set_detect_anomaly, the last of the following lines in the Discriminator network is responsible for this:

    bottleneck = bottleneck[:-1]
    self.embedding = x.view(x.size(0), -1)
    self.logit = self.layers[-1](self.embedding)
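(Anomaly detection only needs the global switch shown below, set before the failing forward and backward passes run; with it enabled, the traceback also points at the forward-pass operation whose saved tensor was later changed in place.)

    import torch

    # Standard PyTorch API; enable before running the training step.
    torch.autograd.set_detect_anomaly(True)
    # ... run the failing forward and backward passes here ...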

The strange thing is that I have used this network architecture in other code, where it worked properly. Any suggestions?

The full error:

    site-packages\torch\autograd\__init__.py", line 127, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [8192, 512]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

Solved by removing the line of code containing loss += loss_val.
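The question does not show the surrounding training loop, but the usual failure mode of accumulating the loss tensor itself is that the running sum chains every iteration's graph together, so a later backward() can reach weights that optimizer.step() has already modified in place. A hedged sketch of that pattern and a safer alternative follows; the model, criterion, and names such as running_loss are assumptions, not the asker's code.

    import torch
    import torch.nn as nn

    # Hedged sketch; model, criterion and variable names are assumptions.
    model = nn.Linear(4, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    criterion = nn.MSELoss()

    running_loss = 0.0
    for _ in range(3):
        x, target = torch.randn(8, 4), torch.randn(8, 1)
        loss_val = criterion(model(x), target)

        # Problematic: `running_loss += loss_val` on a *tensor* links every
        # iteration's graph into the running sum, so a later backward() can
        # hit weights that opt.step() has since modified in place.
        # Safer: accumulate a plain Python float for logging instead.
        running_loss += loss_val.item()

        opt.zero_grad()
        loss_val.backward()
        opt.step()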


