
How to implement differentiable hamming loss in pytorch?

How to implement a differentiable loss function that counts the number of wrong predictions?

import numpy as np

output = np.array([1, 0, 4, 10])
target = np.array([1, 2, 4, 15])
loss = np.count_nonzero(output != target) / len(output)  # [0,1,0,1] -> 2 / 4 -> 0.5


I have tried a few implementations, but they are not differentiable:

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

import torch

def hamming_loss(output, target):
  #loss = torch.tensor(torch.nonzero(output != target).size(0)).double() / target.size(0)
  #loss = torch.sum((output != target), dim=0).double() / target.size(0)
  # (output != target) is a boolean tensor built by a non-differentiable
  # comparison, so the result carries no grad_fn and backward() fails
  loss = torch.mean((output != target).double())
  return loss
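
For illustration, this is a minimal way to reproduce the failure, reusing the tensor values from the example above:

import torch

output = torch.tensor([1, 0, 4, 10])
target = torch.tensor([1, 2, 4, 15])

loss = hamming_loss(output, target)
print(loss)          # tensor(0.5000, dtype=torch.float64)
print(loss.grad_fn)  # None -- the != comparison detaches the graph
loss.backward()      # raises the RuntimeError quoted above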

Maybe there is some similar but differentiable loss function?

Why don't you convert your discrete predictions (eg, [1, 0, 4, 10] ) to "soft" predictions, ie, a probability for each label (eg, output becomes a 4x(num labels) matrix of probability vectors)?
Once you have "soft" predictions, you can compute the cross-entropy loss between the predicted output probabilities and the desired targets; see the sketch below.
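
A minimal sketch of that idea, assuming the model emits one logit vector per sample; the 16-class shape and the variable names are illustrative assumptions, not from the original post, and the soft-Hamming variant at the end is an extra illustration in the same spirit:

import torch
import torch.nn.functional as F

# Illustrative assumption: 4 samples and 16 possible labels, so the model
# produces one unnormalized score (logit) vector per sample.
num_classes = 16
logits = torch.randn(4, num_classes, requires_grad=True)
target = torch.tensor([1, 2, 4, 15])

# "Soft" predictions: one probability vector per sample.
probs = F.softmax(logits, dim=1)  # shape (4, num_classes)

# Cross-entropy between the predicted distribution and the targets.
# F.cross_entropy takes the raw logits and applies log-softmax internally.
ce_loss = F.cross_entropy(logits, target)
ce_loss.backward()  # gradients now reach `logits`

# A differentiable analogue of the Hamming loss: the probability mass
# assigned to wrong labels, averaged over samples.
p_correct = probs[torch.arange(target.size(0)), target]
soft_hamming = (1.0 - p_correct).mean()

As the predicted distributions sharpen toward one label per sample, soft_hamming approaches the original count-based Hamming loss while staying differentiable everywhere.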
