
Keras MeanSquaredError: calculate loss per individual sample

I'm trying to get the MeanSquaredError of each individual sample in my tensors.

Here is some sample code to show my problem.

import numpy as np
import tensorflow as tf

src = np.random.uniform(size=(2, 5, 10))
tgt = np.random.uniform(size=(2, 5, 10))
srcTF = tf.convert_to_tensor(src)
tgtTF = tf.convert_to_tensor(tgt)
print(srcTF.shape, tgtTF.shape)

# Reduction.NONE still averages over the last axis, so one axis is lost.
lf = tf.keras.losses.MeanSquaredError(reduction=tf.keras.losses.Reduction.NONE)

flowResults = lf(srcTF, tgtTF)
print(flowResults.shape)

Here are the results:

(2, 5, 10) (2, 5, 10)
(2, 5)

I want to keep all the original dimensions of my tensors and just calculate the loss on the individual samples. Is there a way to do this in TensorFlow? Note that PyTorch's torch.nn.MSELoss(reduction='none') does exactly what I want, so is there an alternative that works more like that?

Here is a way to do it:

[ins] In [97]: mse = tf.keras.losses.MSE(tf.expand_dims(srcTF, axis=-1), tf.expand_dims(tgtTF, axis=-1))

[ins] In [98]: mse.shape
Out[98]: TensorShape([2, 5, 10])

I think the key here is what counts as a sample. Since MSE is computed over the last axis, you lose that axis because it is what gets "reduced": each entry in the resulting (2, 5) tensor is the mean squared error over the 10 values along the last axis. So to get back the original shape we essentially have to compute the MSE of each scalar individually, for which we need to expand the dimensions. In effect, we are saying that (2, 5, 10) are the batch dimensions and each scalar is its own sample/prediction, which is what tf.expand_dims(<tensor>, -1) accomplishes.
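The same per-element result can be sanity-checked without TensorFlow: the mean over a trailing axis of length 1 is just the squared difference itself, so the unreduced loss (what PyTorch's reduction='none' returns) is simply (src - tgt) ** 2. A minimal NumPy sketch, using the same shapes as above:

```python
import numpy as np

src = np.random.uniform(size=(2, 5, 10))
tgt = np.random.uniform(size=(2, 5, 10))

# Per-element squared error: same shape as the inputs, no reduction.
# This is what torch.nn.MSELoss(reduction='none') would return.
per_element = (src - tgt) ** 2
assert per_element.shape == (2, 5, 10)

# Averaging over the last axis reproduces what Keras' MSE returns,
# which is why the (2, 5, 10) inputs collapsed to (2, 5).
per_vector = per_element.mean(axis=-1)
assert per_vector.shape == (2, 5)
```

This also shows why the expand_dims trick works: after expanding to (2, 5, 10, 1), the mean over the new last axis is a no-op, leaving the squared error at the original (2, 5, 10) shape.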

