
Individual loss of each (final-layer) output of Keras model

When training an ANN for regression, Keras stores the train/validation loss in a History object. In the case of multiple outputs in the final layer with a standard loss function, e.g. the mean squared error (MSE):

  • What does the loss represent in the multi-output scenario? Is it the mean of the individual losses of all outputs, or something else?
  • Can I somehow access the loss of each output individually, without implementing a custom loss function?

Any hints would be much appreciated.

EDIT

from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(10, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(2))
model.compile(loss='mse', optimizer='adam')
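For context, a minimal sketch of the data shapes this snippet expects (the concrete sizes below are made-up assumptions for illustration): `train_X` has shape (samples, timesteps, features), and because of `Dense(2)` the targets need shape (samples, 2).

```python
import numpy as np

# Hypothetical example data; the concrete sizes are assumptions.
train_X = np.random.rand(100, 5, 3)   # 100 samples, 5 timesteps, 3 features
train_y = np.random.rand(100, 2)      # 2 regression targets per sample

# This is the input_shape the LSTM layer above receives:
input_shape = (train_X.shape[1], train_X.shape[2])  # (5, 3)
```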

Re-phrasing my question after adding the snippet:

How is the loss calculated in the case of two neurons in the output layer, and what does the resulting loss represent? Is it the average loss over both outputs?

The standard MSE loss is implemented in Keras as follows:

from keras import backend as K

def mse_loss(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true), axis=-1)

If you now have multiple neurons at the output layer, the computed loss is simply the mean of the squared errors of all individual neurons.
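As a quick numeric sketch (with made-up numbers), this averaging can be checked by hand with NumPy: the per-sample loss is the mean of the squared errors over the output dimension, and the loss Keras reports is then the mean over the batch.

```python
import numpy as np

# Made-up targets and predictions for a 2-output model, batch of 2
y_true = np.array([[1.0, 2.0],
                   [3.0, 4.0]])
y_pred = np.array([[1.5, 2.0],
                   [2.0, 5.0]])

# Per-sample loss: mean of squared errors over the output axis (axis=-1)
per_sample = np.mean((y_pred - y_true) ** 2, axis=-1)   # [0.125, 1.0]

# Loss reported during training: mean over the batch
loss = per_sample.mean()                                # 0.5625
```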

If you want the loss of each individual output to be tracked, you have to write your own metric for that. To keep it as simple as possible, you can use the following metric (it has to be nested, since Keras only allows a metric to take the inputs y_true and y_pred):

def inner_part_custom_metric(y_true, y_pred, i):
    d = y_pred - y_true
    square_d = K.square(d)
    return square_d[:, i]  # y has shape [batch_size, output_dim]

def custom_metric_output_i(i):
    def custom_metric_i(y_true, y_pred):
        return inner_part_custom_metric(y_true, y_pred, i)
    return custom_metric_i

Now, say you have 2 output neurons. Create 2 instances of this metric:

metrics = [custom_metric_output_i(0), custom_metric_output_i(1)]

Then compile your model as follows:

from keras.optimizers import SGD

model = ...
model.compile(loss='mean_squared_error', optimizer=SGD(lr=0.01), metrics=metrics)
history = model.fit(...)

Now you can access the loss of each individual neuron in the History object. Use the following command to see what the History object contains:

print(history.history.keys())

and then:

print(history.history['custom_metric_i'])

which, as stated before, will print the history for only one output dimension.
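One caveat worth checking against your Keras version: both closures returned by custom_metric_output_i share the inner function name custom_metric_i, and Keras derives the history key from the function's __name__, so the two keys can collide. Giving each closure a distinct __name__ keeps the per-output histories apart. A sketch of the naming trick, using NumPy in place of the Keras backend so it can be tested in isolation (the metric name 'mse_output_%d' is a hypothetical choice):

```python
import numpy as np

def custom_metric_output_i(i):
    def custom_metric_i(y_true, y_pred):
        # Squared error of output column i; y has shape [batch_size, output_dim]
        return np.square(y_pred - y_true)[:, i]
    # Distinct __name__ per output, so the history keys do not collide
    custom_metric_i.__name__ = 'mse_output_%d' % i
    return custom_metric_i

metrics = [custom_metric_output_i(0), custom_metric_output_i(1)]
# history.history would then contain keys like 'mse_output_0' and 'mse_output_1'
```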
