

Simple L1 loss in PyTorch

I want to calculate L1 loss in a neural network. I came across this example at https://discuss.pytorch.org/t/simple-l2-regularization/139/2 , but there are some errors in this code.

Is this really how to calculate L1 loss in a NN, or is there a simpler way?

l1_crit = nn.L1Loss()
reg_loss = 0
for param in model.parameters():
    reg_loss += l1_crit(param)  # error: nn.L1Loss expects two tensors (input, target), so this raises a TypeError

factor = 0.0005
loss += factor * reg_loss

Is this equivalent in any way to simply doing:

loss = torch.nn.L1Loss()

I assume not, because I am not passing along any network parameters. Just checking if there is an existing function to do this.

If I am understanding well, you want to compute the L1 loss of your model (as you say in the beginning). However, I think you may have gotten confused by the discussion in the PyTorch forum.

From what I understand of the PyTorch forum thread and the code you posted, the author is trying to regularize the network weights with an L1 penalty. The goal is to encourage the weight values to stay in a sensible range (not too big, not too small). That is L1 weight regularization, which is why it iterates over model.parameters(): it takes the parameters as input and produces a penalty term as output. (A related but distinct technique, weight normalization, is described here: https://pytorch.org/docs/master/generated/torch.nn.utils.weight_norm.html )
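As a side note, the usual way to get an L1 penalty is to sum the absolute values of the parameters directly, without going through nn.L1Loss. A minimal sketch (the placeholder model and data_loss here are assumptions for illustration only; factor is the same 0.0005 as in your snippet):

import torch
import torch.nn as nn

model = nn.Linear(10, 2)           # placeholder model, for illustration only
data_loss = torch.tensor(1.0)      # stands in for the task loss from a criterion

factor = 0.0005
# L1 penalty: sum of absolute values of all trainable parameters
reg_loss = sum(param.abs().sum() for param in model.parameters())
loss = data_loss + factor * reg_loss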

On the other hand, L1 loss is just a way to determine how two values differ from each other, so the "loss" is a measure of this difference. In the case of L1 loss, this error is computed as the mean absolute error, loss = mean(|x - y|), where x and y are the values to compare. So the loss computation takes two values as input and produces one value as output. Check this for the loss computation: https://pytorch.org/docs/master/generated/torch.nn.L1Loss.html
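For instance (a tiny sketch with made-up tensors):

import torch

x = torch.tensor([1.0, 2.0, 3.0])
y = torch.tensor([1.5, 2.0, 2.0])
criterion = torch.nn.L1Loss()   # reduction='mean' by default
print(criterion(x, y))          # mean(|x - y|) = (0.5 + 0.0 + 1.0) / 3 = 0.5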

To answer your question: no, the two snippets are not equivalent, since the first is trying to regularize the weights, while in the second you are computing a loss. This would be the loss computation with some context:

sample, target = dataset[i]                  # one (input, target) pair from the dataset
target_predicted = model(sample)
loss = torch.nn.L1Loss()
loss_value = loss(target_predicted, target)  # forward signature is (input, target)
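And if you want both at once, the loss computation and the L1 weight penalty can be combined in a single training step (a sketch, assuming model, dataset, i and an optimizer are already defined as above):

criterion = torch.nn.L1Loss()
factor = 0.0005

sample, target = dataset[i]
target_predicted = model(sample)
loss_value = criterion(target_predicted, target)
# add the L1 penalty on the weights to the data loss
loss_value = loss_value + factor * sum(p.abs().sum() for p in model.parameters())

optimizer.zero_grad()
loss_value.backward()
optimizer.step()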
