
Weighting true positives vs true negatives

This loss function in tensorflow is used in keras/tensorflow as a loss function to weight binary decisions.

It weights false positives vs false negatives:

targets * -log(sigmoid(logits)) + (1 - targets) * -log(1 - sigmoid(logits))

The argument pos_weight is used as a multiplier for the positive targets:

targets * -log(sigmoid(logits)) * pos_weight + (1 - targets) * -log(1 - sigmoid(logits))
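These formulas match the documented behavior of tf.nn.weighted_cross_entropy_with_logits; assuming that is the function in question, a minimal usage sketch (the labels/logits below are made-up example values):

```python
import tensorflow as tf

labels = tf.constant([1.0, 1.0, 0.0, 0.0])
logits = tf.constant([2.0, -1.0, -3.0, 0.5])

# pos_weight multiplies only the positive-target term, so it trades off
# false negatives against false positives; it does not distinguish
# true positives from true negatives.
loss = tf.nn.weighted_cross_entropy_with_logits(
    labels=labels, logits=logits, pos_weight=2.0)
print(tf.reduce_mean(loss).numpy())
```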

Does anybody have suggestions for how, in addition, true positives could be weighted against true negatives, if their loss/reward should not have equal weight?

First, note that with cross entropy loss there is some (possibly very, very small) penalty for each example, even if it is correctly classified. For example, if the correct class is 1 and our logit is 10, the penalty will be

-log(sigmoid(10)) ≈ 4.5e-5

This loss (very slightly) pushes the network to produce an even higher logit for this case, to get its sigmoid even closer to 1. Similarly, for the negative class, even if the logit is -10, the loss will push it to be even more negative.
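A quick sanity check of that number in plain Python (just a sketch, not part of the original answer):

```python
import math

def bce_penalty(label, logit):
    # per-example cross entropy: -y*log(sigmoid(z)) - (1-y)*log(1 - sigmoid(z))
    p = 1.0 / (1.0 + math.exp(-logit))
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

print(bce_penalty(1, 10))   # ~4.5e-05: tiny but nonzero, so the logit is still pushed higher
print(bce_penalty(0, -10))  # same value by symmetry for a confident negative
```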

This is usually fine, because the loss from such terms is very small. If you would like your network to actually achieve zero loss, you can use label_smoothing. This is probably as close to "rewarding" the network as you can get in the classic setup of minimizing loss (you can obviously "reward" the network by adding some negative constant to the loss, but that won't change the gradient or the training behavior).
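In Keras, this is the label_smoothing argument of the built-in binary cross entropy; a small sketch with illustrative values:

```python
import tensorflow as tf

# With label_smoothing=0.1, targets 1/0 are replaced by 0.95/0.05, so the
# per-example loss reaches its minimum at a finite logit (~2.94) instead of
# pushing already-correct logits towards +/- infinity.
loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=True, label_smoothing=0.1)

labels = tf.constant([[1.0], [0.0]])
logits = tf.constant([[10.0], [-10.0]])
print(loss_fn(labels, logits).numpy())
```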

Having said that, you can penalize the network differently for the various cases - tp, tn, fp, fn - similarly to what is described in Weight samples if incorrect guessed in binary cross entropy. (It seems like the implementation there is actually incorrect: you want to use the corresponding elements of the weight_tensor to weight the individual log(sigmoid(...)) terms, not the final output of cross_entropy.)
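A minimal sketch of that idea, with hypothetical weights w_tp/w_tn/w_fp/w_fn (not from the original answer), assuming the four cases are defined by thresholding the prediction at 0.5 and applying the weights to the individual log terms rather than to the summed cross entropy:

```python
import tensorflow as tf

def case_weighted_bce(labels, logits, w_tp=1.0, w_tn=1.0, w_fp=2.0, w_fn=2.0):
    """Sketch: weight each example's log-term by its case (tp/tn/fp/fn)."""
    probs = tf.sigmoid(logits)
    preds = tf.cast(probs >= 0.5, labels.dtype)
    # one weight per example, chosen by (label, hard prediction)
    weights = (labels * preds * w_tp
               + (1.0 - labels) * (1.0 - preds) * w_tn
               + (1.0 - labels) * preds * w_fp
               + labels * (1.0 - preds) * w_fn)
    # weight the individual -log(sigmoid) / -log(1 - sigmoid) terms
    pos_term = labels * -tf.math.log(probs + 1e-7)
    neg_term = (1.0 - labels) * -tf.math.log(1.0 - probs + 1e-7)
    return tf.reduce_mean(weights * (pos_term + neg_term))
```

Choosing w_tn different from w_tp then lets a true negative be "rewarded" (penalized less) relative to a true positive.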

Using this scheme, you might want to penalize very wrong answers much more than almost-right ones. However, note that this already happens to a degree because of the shape of log(sigmoid(...)).
