
Individual loss of each (final-layer) output of Keras model

When training an ANN for regression, Keras stores the train/validation loss in a History object. In the case of multiple outputs in the final layer with a standard loss function, i.e. the Mean Squared Error (MSE):

  • What does the loss represent in the multi-output scenario? Is it the average/mean of the individual losses of all outputs, or is it something else?
  • Can I somehow access the loss of each output individually, without implementing a custom loss function?

Any hints would be much appreciated.

EDIT:

from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(10, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(2))
model.compile(loss='mse', optimizer='adam')

Rephrasing my question after adding the snippet:

How is the loss calculated in the case of two neurons in the output layer, and what does the resulting loss represent? Is it the average loss over both outputs?

The standard MSE loss is implemented in Keras as follows:

def mse_loss(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true), axis=-1)

If you now have multiple neurons in the output layer, the computed loss is simply the mean of the squared errors of all individual neurons: averaged first across the outputs, then across the batch.
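
To make that concrete, here is a small NumPy sketch with made-up numbers (two samples, two output neurons):

import numpy as np

y_true = np.array([[1.0, 2.0],
                   [3.0, 4.0]])
y_pred = np.array([[1.5, 2.0],
                   [2.0, 5.0]])

squared_error = (y_pred - y_true) ** 2     # shape [batch_size, output_dim]
per_sample = squared_error.mean(axis=-1)   # what mse_loss returns: [0.125, 1.0]
print(per_sample.mean())                   # 0.5625 -- the single scalar loss Keras reports

Both outputs are folded into that one number, which is why the loss in the history object alone cannot tell them apart.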

If you want the loss of each individual output to be tracked, you have to write your own metric for that. To keep it as simple as possible, you can use the following metric (it has to be nested, since Keras only allows a metric to take the two inputs y_true and y_pred):

from keras import backend as K

def inner_part_custom_metric(y_true, y_pred, i):
    # y_true and y_pred have shape [batch_size, output_dim]
    d = y_pred - y_true
    square_d = K.square(d)
    return square_d[:, i]  # squared error of output i; Keras averages it over the batch

def custom_metric_output_i(i):
    def custom_metric_i(y_true, y_pred):
        return inner_part_custom_metric(y_true, y_pred, i)
    # Give each instance a distinct name; otherwise all instances are
    # reported under the same key in the history object.
    custom_metric_i.__name__ = 'custom_metric_output_' + str(i)
    return custom_metric_i

Now, say you have 2 output neurons. Create 2 instances of this metric:

metrics = [custom_metric_output_i(0), custom_metric_output_i(1)]

Then compile your model as follows:

from keras.optimizers import SGD

model = ...
model.compile(loss='mean_squared_error', optimizer=SGD(lr=0.01), metrics=metrics)
history = model.fit(...)
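
For a runnable end-to-end version, the pieces can be combined with the LSTM model from the question; note that the data shapes, optimizer, and epoch count below are illustrative assumptions, not from the original post:

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# Dummy data: 100 samples, 5 timesteps, 3 features, 2 regression targets (assumed shapes)
train_X = np.random.rand(100, 5, 3)
train_y = np.random.rand(100, 2)

model = Sequential()
model.add(LSTM(10, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(2))

metrics = [custom_metric_output_i(0), custom_metric_output_i(1)]
model.compile(loss='mse', optimizer='adam', metrics=metrics)
history = model.fit(train_X, train_y, epochs=2, verbose=0)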

Now you can access the loss of each individual output neuron in the history object. Use the following command to see what's in it:

print(history.history.keys())

and then:

print(history.history['custom_metric_output_0'])

This prints the tracked squared error for output 0 only. Without the distinct names assigned above, both metric instances would end up under a single key, and you would see the history for only one dimension.
