How can I evaluate the test set at the end of each epoch during training? I am using TensorFlow

I am using TensorFlow-Keras to develop a CNN model, and I have split the data set into train, validation, and test sets. I need to evaluate the model on the test set at the end of each epoch, in addition to the train and validation sets, to track its performance. Below is the code I use to track the train and validation metrics.

from tensorflow.keras.callbacks import LambdaCallback

result_dic = {"epochs": []}
json_logging_callback = LambdaCallback(
                on_epoch_begin=lambda epoch, logs: [learning_rate],
                on_epoch_end=lambda epoch, logs:
                result_dic["epochs"].append({
                    'epoch': epoch + 1, 
                    'acc': str(logs['acc']), 
                    'val_acc': str(logs['val_acc'])
                }))
model.fit(x_train, y_train,
                      validation_data=(x_val, y_val),
                      batch_size=batch_size,
                      epochs=epochs,
                      callbacks=[json_logging_callback])

output:

Epoch 1/5
1/1 [==============================] - 4s 4s/step - acc: 0.8611 - val_acc: 0.8333 

However, I'm not sure how to add the test set to my callback to produce the following output.

Expected output:

Epoch 1/5
1/1 [==============================] - 4s 4s/step - acc: 0.8611 - val_acc: 0.8333  - test_acc: xxx

To display your test accuracy after each epoch, you could customize your fit function to display this metric. Check out this documentation, or you could, as shown here, define a simple callback for your test dataset and pass it into your fit function:


model.fit(x_train, y_train,
                      validation_data=(x_val, y_val),
                      batch_size=batch_size,
                      epochs=epochs,
                      callbacks=[json_logging_callback, 
                                 your_test_callback((x_test, y_test))])

If you want complete flexibility, you could try writing a custom training loop.
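For reference, here is a minimal sketch of such a loop. It is not from the original answer: it reuses model, batch_size, epochs, and the x_*/y_* arrays from above, and the sparse-categorical loss, Adam optimizer, and accuracy metric are illustrative assumptions (it also assumes TF 2.5+ for metric.reset_state()):

import tensorflow as tf

# Illustrative: wrap the existing NumPy arrays in tf.data pipelines
train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(batch_size)
val_ds = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(batch_size)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(batch_size)

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()   # assumed loss
optimizer = tf.keras.optimizers.Adam()                       # assumed optimizer
train_acc = tf.keras.metrics.SparseCategoricalAccuracy()
val_acc = tf.keras.metrics.SparseCategoricalAccuracy()
test_acc = tf.keras.metrics.SparseCategoricalAccuracy()

for epoch in range(epochs):
    # Training pass
    train_acc.reset_state()
    for x_batch, y_batch in train_ds:
        with tf.GradientTape() as tape:
            preds = model(x_batch, training=True)
            loss = loss_fn(y_batch, preds)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        train_acc.update_state(y_batch, preds)

    # Validation pass
    val_acc.reset_state()
    for x_batch, y_batch in val_ds:
        val_acc.update_state(y_batch, model(x_batch, training=False))

    # Test pass at the end of the epoch
    test_acc.reset_state()
    for x_batch, y_batch in test_ds:
        test_acc.update_state(y_batch, model(x_batch, training=False))

    print(f"Epoch {epoch + 1}: "
          f"acc={float(train_acc.result()):.4f} "
          f"val_acc={float(val_acc.result()):.4f} "
          f"test_acc={float(test_acc.result()):.4f}")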

Update: Since you want to have a single JSON for all metrics, you should do the following:

Define your TestCallback and add your test accuracy (and loss if you want) to your logs dictionary:

import tensorflow as tf

class TestCallback(tf.keras.callbacks.Callback):
    def __init__(self, test_data):
        super().__init__()
        self.test_data = test_data

    def on_epoch_end(self, epoch, logs=None):
        x, y = self.test_data
        # Evaluate on the test set and expose the result to later callbacks via logs
        loss, acc = self.model.evaluate(x, y, verbose=0)
        logs['test_accuracy'] = acc
        # logs['test_loss'] = loss  # optionally record the test loss as well
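The unpacking loss, acc = self.model.evaluate(...) assumes the model tracks exactly one metric besides the loss. If you track several, a dict-based variant is less order-dependent; the sketch below is an alternative not taken from the original answer (TestCallbackDict is an illustrative name) and relies on return_dict=True, available since TF 2.2:

class TestCallbackDict(tf.keras.callbacks.Callback):
    """Variant that copies every test metric into logs under a 'test_' prefix."""
    def __init__(self, test_data):
        super().__init__()
        self.test_data = test_data

    def on_epoch_end(self, epoch, logs=None):
        x, y = self.test_data
        # return_dict=True yields e.g. {'loss': 0.42, 'accuracy': 0.86}
        results = self.model.evaluate(x, y, verbose=0, return_dict=True)
        if logs is not None:
            for name, value in results.items():
                logs['test_' + name] = value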

Then add the test accuracy to your results dictionary. Note that the keys in logs match the metric name you passed to compile: metrics=['accuracy'] produces 'accuracy'/'val_accuracy', while metrics=['acc'] produces 'acc'/'val_acc' as in your original code:

result_dic = {"epochs": []}

json_logging_callback = tf.keras.callbacks.LambdaCallback(
                on_epoch_begin=lambda epoch, logs: [learning_rate],
                on_epoch_end=lambda epoch, logs:
                result_dic["epochs"].append({
                    'epoch': epoch + 1, 
                    'acc': str(logs['accuracy']), 
                    'val_acc': str(logs['val_accuracy']),
                    'test_acc': str(logs['test_accuracy'])
                }))

Finally, use both callbacks in your fit function, but note the order: TestCallback must come before json_logging_callback so that 'test_accuracy' already exists in logs when the JSON logger reads it:

model.fit(x_train, y_train,
                      validation_data=(x_val, y_val),
                      batch_size=batch_size,
                      epochs=epochs,
                      callbacks=[TestCallback((x_test, y_test)), json_logging_callback])
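
Since the goal is a single JSON with all metrics, you can write result_dic to disk once fit returns; a minimal sketch, where the filename metrics.json is just an example:

import json

# Persist the per-epoch train/val/test metrics collected by the callbacks
with open("metrics.json", "w") as f:
    json.dump(result_dic, f, indent=2)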
