
TensorFlow: one custom metric for multi-output models

I can't find the info in the documentation, so I am asking here.

I have a multi-output model with 3 different outputs:

model = tf.keras.Model(inputs=[input], outputs=[output1, output2, output3])

The predicted labels used for validation are constructed from these 3 outputs to form a single one; this is a post-processing step. The dataset used for training consists of those 3 intermediary outputs; for validation I evaluate on a dataset of final labels instead of the 3 kinds of intermediary data.

I would like to evaluate my model using a custom metric that handles the post-processing and the comparison with the ground truth.

My question is: in the code of the custom metric, will y_pred be a list of the 3 outputs of the model?

class MyCustomMetric(tf.keras.metrics.Metric):

  def __init__(self, name='my_custom_metric', **kwargs):
    super(MyCustomMetric, self).__init__(name=name, **kwargs)

  def update_state(self, y_true, y_pred, sample_weight=None):
    # ? is y_pred a list [batch_output_1, batch_output_2, batch_output_3] ?
    pass

  def result(self):
    pass

# one single metric handling the 3 outputs?
model.compile(optimizer=tf.compat.v1.train.RMSPropOptimizer(0.01),
              loss=tf.keras.losses.categorical_crossentropy,
              metrics=[MyCustomMetric()])

With your given model definition, this is a standard multi-output Model.

model = tf.keras.Model(inputs=[input], outputs=[output_1, output_2, output_3])

In general, all (custom) metrics as well as (custom) losses will be called on every output separately (as y_pred)! Within the loss/metric function you will only see one output together with its corresponding target tensor. By passing a list of loss functions (length == number of outputs of your model) you can specify which loss will be used for which output:

model.compile(optimizer=Adam(), loss=[loss_for_output_1, loss_for_output_2, loss_for_output_3], loss_weights=[1, 4, 8])

The total loss (which is the objective function to minimize) will be the additive combination of all losses, each multiplied by its given loss weight.
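To make the combination concrete, here is a minimal sketch of that weighted sum; the per-output loss values are made-up numbers, used only to illustrate the arithmetic:

```python
# Sketch of how Keras combines per-output losses into one training objective.
def combined_loss(per_output_losses, loss_weights):
    # Weighted sum: total = sum(weight_i * loss_i)
    return sum(w * l for l, w in zip(per_output_losses, loss_weights))

# With loss_weights=[1, 4, 8]: 1*0.5 + 4*0.2 + 8*0.1
print(round(combined_loss([0.5, 0.2, 0.1], [1, 4, 8]), 6))  # 2.1
```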

It is almost the same for the metrics! Here you can pass (as for the loss) a list (length == number of outputs) of metrics and tell Keras which metric to use for which of your model outputs.

model.compile(optimizer=Adam(), loss='mse', metrics=[metrics_for_output_1, metrics_for_output_2, metrics_for_output_3])

Here metrics_for_output_X can be either a single function or a list of functions, all of which will be called with the corresponding output_X as y_pred.

This is explained in detail in the documentation of multi-output models in Keras. They also show examples of using dictionaries (to map loss/metric functions to a specific output) instead of lists: https://keras.io/getting-started/functional-api-guide/#multi-input-and-multi-output-models
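The dict form keys losses and metrics by the output layers' names. A minimal sketch (not the asker's actual model; layer names and shapes are made up for illustration):

```python
# Map losses/metrics to specific outputs via dicts keyed by output layer names.
import tensorflow as tf

inp = tf.keras.Input(shape=(4,))
out_a = tf.keras.layers.Dense(2, name="out_a")(inp)
out_b = tf.keras.layers.Dense(3, name="out_b")(inp)
model = tf.keras.Model(inputs=inp, outputs=[out_a, out_b])

model.compile(
    optimizer="adam",
    loss={"out_a": "mse", "out_b": "mse"},         # one loss per named output
    metrics={"out_a": ["mae"], "out_b": ["mse"]},  # per-output metric lists
)
```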

Further information:

If I understand you correctly, you want to train your model using a loss function that compares the three model outputs with three ground-truth values, and you want to do some sort of performance evaluation by comparing a value derived from the three model outputs with a single ground-truth value. Usually a model is trained on the same objective it is evaluated on; otherwise you might get poorer results when evaluating your model!

Anyway, for evaluating your model on a single label I suggest you either:

1. (The clean solution)

Rewrite your model and incorporate the post-processing steps. Add all the necessary operations (as layers) and map them to an auxiliary output. For training, you can set the loss_weight of the auxiliary output to zero. Merge your datasets so you can feed your model the model input, the intermediate target outputs, as well as the labels. As explained above, you can then define a metric comparing the auxiliary model output with the given target labels.
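A sketch of this option, assuming the post-processing can be expressed as layers. Here a simple `Average` layer stands in as a placeholder for the real post-processing, and all shapes and names are made up:

```python
# Fold the post-processing into the graph as an auxiliary, untrained output.
import tensorflow as tf

inp = tf.keras.Input(shape=(8,))
hidden = tf.keras.layers.Dense(16, activation="relu")(inp)
out1 = tf.keras.layers.Dense(3, name="out1")(hidden)
out2 = tf.keras.layers.Dense(3, name="out2")(hidden)
out3 = tf.keras.layers.Dense(3, name="out3")(hidden)

# Auxiliary output: placeholder post-processing expressed as a layer.
final = tf.keras.layers.Average(name="final_label")([out1, out2, out3])

model = tf.keras.Model(inputs=inp, outputs=[out1, out2, out3, final])
model.compile(
    optimizer="adam",
    loss=["mse", "mse", "mse", "mse"],
    loss_weights=[1.0, 1.0, 1.0, 0.0],  # zero: aux output never drives training
    metrics={"final_label": ["mae"]},   # your custom metric would go here
)
```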

2.

Or you train your model as-is and derive the metric, e.g. in a custom Callback, by applying your post-processing steps to the three outputs of model.predict(input). This makes it necessary to write custom summaries if you want to track those values in TensorBoard, which is why I would not recommend this solution.
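A sketch of that callback, assuming a hypothetical `post_process()` function (not defined here) that turns the three raw outputs into a single predicted label per sample:

```python
# Compute a post-processed validation metric at the end of each epoch.
import numpy as np
import tensorflow as tf

class PostProcessedMetric(tf.keras.callbacks.Callback):
    def __init__(self, val_x, val_labels, post_process):
        super().__init__()
        self.val_x = val_x
        self.val_labels = val_labels
        self.post_process = post_process  # (out1, out2, out3) -> labels

    def on_epoch_end(self, epoch, logs=None):
        # Run the three heads on the validation inputs ...
        out1, out2, out3 = self.model.predict(self.val_x, verbose=0)
        # ... apply the post-processing to get one label per sample ...
        derived = self.post_process(out1, out2, out3)
        # ... and compare against the single ground-truth labels.
        accuracy = float(np.mean(derived == self.val_labels))
        if logs is not None:
            logs["val_postprocessed_accuracy"] = accuracy
```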
