
Tensorflow: Weighted sparse_softmax_cross_entropy for imbalanced classes across a single image

I'm working on a binary semantic segmentation task where the distribution of one class is very small across any input image, hence there are only a few pixels labeled with it. When using sparse_softmax_cross_entropy, the overall error is easily decreased by ignoring this class. Now I'm looking for a way to weight the classes by a coefficient which penalizes misclassifications of this specific class more heavily than those of the other class.

The documentation of the loss function states:

weights acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weights is a tensor of shape [batch_size], then the loss weights apply to each corresponding sample.

If I understand this correctly, it says that specific samples in a batch are weighted differently compared to others. But that is not what I'm looking for. Does anyone know how to implement a weighted version of this loss function where the weights scale the importance of a specific class rather than of individual samples?
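For concreteness, this is how I read the documented per-sample weighting; a minimal sketch with made-up shapes and values, just to show the behavior I do not want:

import tensorflow as tf

# Made-up shapes: a batch of 4 samples with 2 classes.
logits = tf.random_normal([4, 2])
labels = tf.constant([0, 1, 0, 0], dtype=tf.int32)

# A scalar weight only rescales the whole loss ...
loss_scaled = tf.losses.sparse_softmax_cross_entropy(
    labels=labels, logits=logits, weights=2.0)

# ... while a [batch_size] tensor weights each sample, not each class.
sample_weights = tf.constant([1.0, 10.0, 1.0, 1.0])
loss_per_sample = tf.losses.sparse_softmax_cross_entropy(
    labels=labels, logits=logits, weights=sample_weights)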

To answer my own question:

The authors of the U-Net paper used a pre-computed weight map to handle imbalanced classes.
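The weight map from the paper also includes a border-distance term, but a simple set of per-class weights can already be pre-computed from the label frequencies alone. A rough sketch (the helper name and the inverse-frequency scheme are my own choice, not taken from the paper):

import numpy as np

# Not the exact U-Net weight map; just inverse-frequency class weights.
def inverse_frequency_weights(label_images, num_classes=2):
    counts = np.bincount(
        np.concatenate([lbl.ravel() for lbl in label_images]),
        minlength=num_classes).astype(np.float64)
    # Rare classes end up with weights well above 1.
    return counts.sum() / (num_classes * np.maximum(counts, 1))

# Example with two tiny 3x3 label images where class 1 is rare:
labels = [np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]]),
          np.array([[0, 0, 0], [0, 0, 0], [0, 0, 1]])]
print(inverse_frequency_weights(labels))   # roughly [0.56, 4.5]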

The Institute for Astronomy of ETH Zurich provides a Tensorflow-based U-Net package which contains a weighted version of the softmax cross-entropy function (not the sparse variant; they flatten their labels and logits first):

import numpy as np
import tensorflow as tf
# class_weights: a Python list/array with one weight per class
# flat_labels:   one-hot labels flattened to [n_pixels, n_classes]
# flat_logits:   network output flattened to [n_pixels, n_classes]
class_weights = tf.constant(np.array(class_weights, dtype=np.float32))
weight_map = tf.multiply(flat_labels, class_weights)   # select the weight of each pixel's true class
weight_map = tf.reduce_sum(weight_map, axis=1)         # [n_pixels]
loss_map = tf.nn.softmax_cross_entropy_with_logits_v2(logits=flat_logits, labels=flat_labels)
weighted_loss = tf.multiply(loss_map, weight_map)      # scale each pixel's loss by its class weight
loss = tf.reduce_mean(weighted_loss)
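Since my original question was about the sparse variant, the same idea should carry over by looking up one weight per pixel from the integer labels with tf.gather instead of multiplying one-hot labels. A sketch under assumed placeholder shapes and an assumed weight of 50 for the rare class (my own illustration, not taken from the package):

import tensorflow as tf

# Hypothetical shapes for binary segmentation of 256x256 images;
# swap in your own label and logit tensors.
sparse_labels = tf.placeholder(tf.int32, [None, 256, 256])
logits = tf.placeholder(tf.float32, [None, 256, 256, 2])

class_weights = tf.constant([1.0, 50.0])               # assumed per-class weights
weight_map = tf.gather(class_weights, sparse_labels)   # one weight per pixel, [batch, H, W]
loss_map = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=sparse_labels, logits=logits)               # per-pixel cross entropy, [batch, H, W]
loss = tf.reduce_mean(loss_map * weight_map)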
