How to use mean_squared_error loss in tensorflow session
I am new to tensorflow.
In part of the code for a tensorflow session, there is:
loss = tf.nn.softmax_cross_entropy_with_logits_v2(
logits=net, labels=self.out_placeholder, name='cross_entropy')
self.loss = tf.reduce_mean(loss, name='mean_squared_error')
I want to use the mean_squared_error loss function for this purpose. I found this loss function on the tensorflow website:
tf.losses.mean_squared_error(
labels,
predictions,
weights=1.0,
scope=None,
loss_collection=tf.GraphKeys.LOSSES,
reduction=Reduction.SUM_BY_NONZERO_WEIGHTS
)
I need this loss function for a regression problem. I tried:
loss = tf.losses.mean_squared_error(predictions=net, labels=self.out_placeholder)
self.loss = tf.reduce_mean(loss, name='mean_squared_error')
where net = tf.matmul(input_tensor, weights) + biases
However, I'm not sure if this is the correct way to do it.
First of all, keep in mind that cross-entropy is mainly used for classification, while MSE is used for regression.
In your case, cross-entropy measures the difference between two distributions: the real occurrences (called labels) and your predictions.
So while the first loss function works on the result of the softmax layer (which can be seen as a probability distribution), the second one works directly on the floating-point output of your network (which is not a probability distribution) - therefore they cannot simply be exchanged.
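The distinction can be illustrated with a small NumPy sketch (NumPy is used here just for clarity; the values and shapes are made up, not taken from your model):

```python
import numpy as np

# Raw network outputs (logits) for a batch of 2 examples, 3 outputs each.
net = np.array([[2.0, 1.0, 0.1],
                [0.5, 2.5, 0.3]])
# One-hot classification labels for the cross-entropy case.
labels = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])

def softmax(x):
    # Numerically stable softmax: turns each row of logits into a
    # probability distribution that sums to 1.
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Cross-entropy first maps the logits through softmax, then compares
# the resulting probability distribution to the label distribution.
cross_entropy = -np.sum(labels * np.log(softmax(net)), axis=1)

# MSE compares the raw floating-point outputs to the targets directly,
# with no assumption that either side is a probability distribution.
mse = np.mean((net - labels) ** 2, axis=1)

print(cross_entropy)  # one loss value per example
print(mse)            # one loss value per example
```

Also note that tf.losses.mean_squared_error already applies a reduction (SUM_BY_NONZERO_WEIGHTS by default) and returns a scalar, so the extra tf.reduce_mean in your snippet is redundant, though harmless.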