
Keras - accuracy for multi-output model is not working

An important concern with multi-output models is that training such a model requires the ability to specify different metrics for the different heads (outputs) of the network.

As mentioned in the official documentation:

To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={'output_a': 'accuracy'}

For my model, I am doing something similar with the following:

metrics ={'output_a': 'crossentropy',
          'output_b': 'mse',
          'output_c': 'mse',
          'output_d': 'mse',
          'output_e': 'categorical_accuracy'}
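
The dictionary is passed to compile together with the per-output losses, roughly like this (the layer sizes, losses and optimizer below are just placeholders for illustration; only the output names and the metrics dict above are my actual setup):

from tensorflow import keras

# Sketch of a multi-output model matching the metric names above
# (layer sizes, losses and optimizer are placeholders).
inputs = keras.Input(shape=(32,))
x = keras.layers.Dense(64, activation='relu')(inputs)
outputs = [keras.layers.Dense(10, activation='softmax', name='output_a')(x),
           keras.layers.Dense(1, name='output_b')(x),
           keras.layers.Dense(1, name='output_c')(x),
           keras.layers.Dense(1, name='output_d')(x),
           keras.layers.Dense(10, activation='softmax', name='output_e')(x)]
model = keras.Model(inputs=inputs, outputs=outputs)

model.compile(optimizer='adam',
              loss={'output_a': 'categorical_crossentropy',
                    'output_b': 'mse',
                    'output_c': 'mse',
                    'output_d': 'mse',
                    'output_e': 'categorical_crossentropy'},
              metrics=metrics)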

But when I start training the model, the overall accuracy is nowhere to be seen in the logs, while loss and val_loss are visible.

So my questions are:

  1. Do loss and val_loss imply the overall loss and overall validation loss of the model respectively?
  2. Is it possible to have the acc of the model printed as well?
  1. Do loss and val_loss imply the overall loss and overall validation loss of the model respectively?

    Yes, they are the overall training and validation losses, respectively. The individual losses for each output are weighted according to the coefficients specified in loss_weights (see the sketch after this list).

  2. Is it possible to have the accuracy of the model printed as well?

    You can have the accuracy for each output individually, but I believe Keras doesn't support "overall" metrics. This would require more information on how the individual outputs' metrics should be aggregated.
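
A minimal sketch of what that means (the output names, layer sizes and weight values here are assumed, for illustration only):

from tensorflow import keras

# Two-output toy model; names, sizes and loss weights are placeholders.
inp = keras.Input(shape=(8,))
out_a = keras.layers.Dense(3, activation='softmax', name='output_a')(inp)
out_b = keras.layers.Dense(1, name='output_b')(inp)
model = keras.Model(inp, [out_a, out_b])

model.compile(optimizer='adam',
              loss={'output_a': 'categorical_crossentropy', 'output_b': 'mse'},
              loss_weights={'output_a': 1.0, 'output_b': 0.5},
              metrics={'output_a': 'accuracy'})

# The overall loss reported in the logs is the weighted sum
#   loss = 1.0 * output_a_loss + 0.5 * output_b_loss
# while accuracy only appears per output, e.g. output_a_accuracy and
# val_output_a_accuracy; there is no single overall accuracy entry.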

I will answer the 2nd part, as the 1st is already answered.

Yes, we can print the validation accuracy by creating a custom callback and overriding the on_epoch_end method. Inside on_epoch_end we can access logs, which is a dictionary mapping metric names to their values.

For example -

I have a 13-output model.

    import tensorflow as tf

    class CustomCallbacks(tf.keras.callbacks.Callback):
        def on_epoch_end(self, epoch, logs=None):
            logs = logs or {}
            val_acc = 0
            for i in range(13):
                # accumulate the mean of the 13 per-output validation accuracies
                val_acc += logs.get('val_digit_{}_accuracy'.format(i)) / 13
            print("mean val acc - ", val_acc)

If all you want is to track those custom metrics, I managed to make it work relatively simply by inheriting from ModelCheckpoint:

from tensorflow import keras

class ModdedModelCheckpoint(keras.callbacks.ModelCheckpoint):
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # availableOutputs is a dict defined elsewhere; its keys are the
        # output names whose accuracies should be averaged
        relevantAcc = list(availableOutputs.keys())
        accuracies = [logs[f"val_y_{k}_accuracy"] for k in relevantAcc]
        print(f"Relevant_Accuracies: {accuracies}")
        average = sum(accuracies) / len(relevantAcc)
        print(f"Average Accuracies: {average}")
        logs["val_y_accuracy"] = average
        # call ModelCheckpoint's own on_epoch_end so that checkpointing
        # runs with the augmented logs
        super().on_epoch_end(epoch, logs=logs)

In this case, the checkpoint decides whether to store the best model based on my "fake" val_y_accuracy entry added to the logs.
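
A usage sketch (the file path, mode and training arguments are placeholders; the monitor key is the synthetic entry added above):

checkpoint = ModdedModelCheckpoint(filepath='best_model.h5',
                                   monitor='val_y_accuracy',
                                   mode='max',
                                   save_best_only=True)

model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=10,
          callbacks=[checkpoint])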
