
Keras MSE Loss with Two Outputs

I have a model whose output layer is Dense(2), so each prediction is a list of two floats.

I found a similar example in the Keras documentation:

>>> y_true = [[0., 1.], [0., 0.]]
>>> y_pred = [[1., 1.], [1., 0.]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.  
>>> mse = tf.keras.losses.MeanSquaredError()
>>> mse(y_true, y_pred).numpy()
0.5

Based on the example's output, I think the MSE is computed like this:

first_MSE = mse(y_true[0], y_pred[0])
second_MSE = mse(y_true[1], y_pred[1])
mse = (first_MSE + second_MSE) / 2

Doing the above, I get 0.5, just as in the example. Is that really what happens under the hood?
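To check the arithmetic without TensorFlow, here is a plain-Python sketch of the computation I have in mind; the mse helper here is hypothetical, standing in for the Keras loss object applied to a single sample:

```python
y_true = [[0., 1.], [0., 0.]]
y_pred = [[1., 1.], [1., 0.]]

def mse(t, p):
    # Mean of the squared differences over one sample (the last axis).
    return sum((a - b) ** 2 for a, b in zip(t, p)) / len(t)

first_mse = mse(y_true[0], y_pred[0])    # (1 + 0) / 2 = 0.5
second_mse = mse(y_true[1], y_pred[1])   # (1 + 0) / 2 = 0.5
batch_mse = (first_mse + second_mse) / 2
print(batch_mse)  # 0.5
```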

Yes, MeanSquaredError first takes the mean of the squared differences over the last axis, and then the mean over the batch.

The mean over the last axis of the squared difference is computed here: https://github.com/tensorflow/tensorflow/blob/7ec285825c713af9bc741b8b65d09dd160ec8806/tensorflow/python/keras/losses.py#L1213

The MeanSquaredError class uses the default reduction (sum_over_batch_size), which averages the per-sample losses over the batch, here: https://github.com/tensorflow/tensorflow/blob/7ec285825c713af9bc741b8b65d09dd160ec8806/tensorflow/python/keras/utils/losses_utils.py#L264
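Those two steps can be emulated with NumPy on the example values from the question; this is a sketch of the reduction logic, not the actual TensorFlow code:

```python
import numpy as np

y_true = np.array([[0., 1.], [0., 0.]])
y_pred = np.array([[1., 1.], [1., 0.]])

# Step 1: mean over the last axis -> one loss per sample.
# This is what you would get with reduction=Reduction.NONE.
per_sample = np.mean(np.square(y_true - y_pred), axis=-1)  # [0.5, 0.5]

# Step 2: the default sum_over_batch_size reduction averages
# the per-sample losses over the batch.
loss = per_sample.mean()  # 0.5
print(per_sample, loss)
```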
