
MSE loss function of Keras shows different output than MSE metric of Tensorflow?

I am training a convolutional network whose last layer produces continuous output across 4 nodes. I am using the Mean Squared Error as the loss function. As a check, I also used the Mean Squared Error from Tensorflow as a metric. The two only agree on the first batch of the first epoch, so my question is: why do they differ after that? The network uses convolutional layers with max pooling, followed by flattening and dropout.

Moreover, I was also wondering how the Mean Squared Error is computed across the 4 nodes. Is it just the sum of the Mean Squared Error of each node? When I calculate the Mean Squared Error per node, I cannot find a clear connection to the reported value.
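For reference, Keras' `'mse'` averages the squared error over the output nodes and then over the batch, so it equals the mean over all elements; equivalently, it is the mean (not the sum) of the per-node MSEs. A small NumPy check with made-up values:

```python
import numpy as np

# Two samples, 4 output nodes each (matching the network's last layer).
y_true = np.array([[1.0, 2.0, 3.0, 4.0],
                   [0.0, 1.0, 0.0, 1.0]])
y_pred = np.array([[1.5, 2.0, 2.0, 4.0],
                   [0.0, 0.0, 1.0, 1.0]])

# Keras' 'mse': squared errors averaged over the node axis, then over
# the batch -- i.e. the mean over ALL elements.
keras_mse = np.mean(np.square(y_pred - y_true))

# Per-node MSEs, each computed over the batch dimension.
per_node = np.mean(np.square(y_pred - y_true), axis=0)

# The overall loss is the MEAN of the per-node MSEs, not their sum.
assert np.isclose(keras_mse, per_node.mean())
```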

This is the metric:

import tensorflow as tf
from keras import backend as K

def loss(y_true, y_pred):
    # tf.metrics.mean_squared_error returns a (value_op, update_op)
    # pair; index [1] is the update op.
    loss = tf.metrics.mean_squared_error(y_true, y_pred)[1]
    K.get_session().run(tf.local_variables_initializer())
    return loss
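One point worth knowing here: in TF 1.x, `tf.metrics.mean_squared_error` is a *streaming* metric. The `[1]` element is the update op, which accumulates a running total of squared errors and a running count across batches, so from the second batch onward it reports the average over everything seen so far, while Keras' `'mse'` loss is recomputed fresh on each batch. A minimal NumPy sketch of that running-average behaviour (values are illustrative):

```python
import numpy as np

# Simulate the stateful behaviour of tf.metrics.mean_squared_error:
# it keeps a running total of squared errors and a running element
# count, so its value is an average over ALL batches seen so far.
total, count = 0.0, 0

def streaming_mse(y_true, y_pred):
    """Running-average MSE, like TF 1.x's streaming metric."""
    global total, count
    sq = np.square(y_pred - y_true)
    total += sq.sum()
    count += sq.size
    return total / count

batch1_true, batch1_pred = np.array([1.0, 2.0]), np.array([2.0, 2.0])
batch2_true, batch2_pred = np.array([0.0, 0.0]), np.array([3.0, 4.0])

# First batch: streaming and per-batch MSE agree.
m1 = streaming_mse(batch1_true, batch1_pred)                # 0.5
# Second batch: the streaming value still mixes in batch 1.
m2 = streaming_mse(batch2_true, batch2_pred)                # 6.5
per_batch2 = np.mean(np.square(batch2_pred - batch2_true))  # 12.5
```

This matches the observation in the question: only the very first batch agrees, because at that point the running average covers exactly one batch.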

And here I compile the model:

model.compile(loss='mse', optimizer=adam, metrics=[loss, node])

This is how I calculated the Mean Squared Error for one node:

def node(y_true, y_pred):
    loss = tf.metrics.mean_squared_error(y_true[:,0], y_pred[:,0])[1]
    K.get_session().run(tf.local_variables_initializer())
    return node

And this is a simplified form of the model:

    width = height = 128
    model = Sequential()

    model.add(Convolution2D(filters=64, kernel_size=(5, 5), activation='relu', padding='same',
                            input_shape=(width, height, 1)))
    model.add(MaxPooling2D(pool_size=(3, 3)))
    model.add(Flatten())
    model.add(Dense(units=256, activation='relu'))
    model.add(Dropout(0.4))
    model.add(Dense(units=4, activation='linear'))

    adam = Adam(lr=0.01, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0)
    model.compile(loss='mse', optimizer=adam, metrics=[loss, node])

You are returning the function itself.

Look at your code:

def node(y_true, y_pred):
    loss = tf.metrics.mean_squared_error(y_true[:,0], y_pred[:,0])[1]
    K.get_session().run(tf.local_variables_initializer())
    return node # This is a function name. This should be "return loss"

Try correcting this first.
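Beyond renaming the return value, a stateless alternative sidesteps the streaming-metric state entirely: compute the per-batch MSE for the node directly. In Keras this would be `K.mean(K.square(y_pred[:, 0] - y_true[:, 0]))` (with `from keras import backend as K`); the sketch below uses NumPy in its place so the arithmetic is easy to check, with made-up data:

```python
import numpy as np

# Stateless per-node MSE: computed fresh on every batch, no running
# state, so it always matches the current batch.
def node0_mse(y_true, y_pred):
    return np.mean(np.square(y_pred[:, 0] - y_true[:, 0]))

y_true = np.array([[1.0, 2.0, 3.0, 4.0],
                   [0.0, 1.0, 0.0, 1.0]])
y_pred = np.array([[2.0, 2.0, 3.0, 4.0],
                   [1.0, 1.0, 0.0, 1.0]])

# ((2-1)^2 + (1-0)^2) / 2 = 1.0
assert node0_mse(y_true, y_pred) == 1.0
```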
