
In pytorch, how to train a model with two or more outputs?

output_1, output_2 = model(x)
loss = cross_entropy_loss(output_1, target_1)
loss.backward()
optimizer.step()  # updates the parameters in place

loss = cross_entropy_loss(output_2, target_2)
loss.backward()   # <-- RuntimeError raised here
optimizer.step()

However, when I run this piece of code, I get this error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [1, 4]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

So I would really like to know how I am supposed to train a model with two or more outputs.

The entire premise on which pytorch (and other DL frameworks) is founded is the backpropagation of the gradients of a scalar loss function. In your snippet, the first optimizer.step() modifies the model's parameters in place; the second loss.backward() then needs the parameter values that were saved during the forward pass, which is exactly the in-place modification the error complains about.
In your case, you have a vector (dim=2) loss function:

[cross_entropy_loss(output_1, target_1), cross_entropy_loss(output_2, target_2)]

You need to decide how to combine these two losses into a single scalar loss.
For instance:

weight = 0.5  # relative weight
loss = weight * cross_entropy_loss(output_1, target_1) + (1. - weight) * cross_entropy_loss(output_2, target_2)
# now loss is a scalar
loss.backward()
optimizer.step()
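To make this concrete, here is a minimal self-contained sketch of a full training step. The two-headed model `TwoHeadNet`, the layer sizes, and the tensor shapes are assumptions for illustration, not from the question:

```python
import torch
import torch.nn as nn

# Hypothetical two-headed model: a shared trunk feeding two output heads.
class TwoHeadNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Linear(8, 16)
        self.head_1 = nn.Linear(16, 4)  # logits for task 1
        self.head_2 = nn.Linear(16, 4)  # logits for task 2

    def forward(self, x):
        h = torch.relu(self.trunk(x))
        return self.head_1(h), self.head_2(h)

model = TwoHeadNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
cross_entropy_loss = nn.CrossEntropyLoss()

x = torch.randn(5, 8)                 # batch of 5 samples
target_1 = torch.randint(0, 4, (5,))  # class labels for task 1
target_2 = torch.randint(0, 4, (5,))  # class labels for task 2

output_1, output_2 = model(x)
weight = 0.5  # relative weight of the two tasks
loss = weight * cross_entropy_loss(output_1, target_1) \
     + (1. - weight) * cross_entropy_loss(output_2, target_2)

optimizer.zero_grad()
loss.backward()   # one backward pass through both heads and the trunk
optimizer.step()  # one parameter update
```

Because the two losses are summed into one scalar before backward(), the shared trunk receives gradient contributions from both heads in a single pass, and optimizer.step() runs only once per batch, so no saved tensor is invalidated.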
