Keras training progress bar on one line with epoch number

When I use Keras to train a model with model.fit(), I see a progress bar that looks like this:

Epoch 1/10
8000/8000 [==========] - 55s 7ms/step - loss: 0.9318 - acc: 0.0783 - val_loss: 0.8631 - val_acc: 0.1180
Epoch 2/10
8000/8000 [==========] - 55s 7ms/step - loss: 0.6587 - acc: 0.1334 - val_loss: 0.7052 - val_acc: 0.1477
Epoch 3/10
8000/8000 [==========] - 54s 7ms/step - loss: 0.5701 - acc: 0.1526 - val_loss: 0.6445 - val_acc: 0.1632

To improve readability, I would like to have the epoch number on the same line as the progress bar, like this:

Epoch 1/10: 8000/8000 [==========] - 55s 7ms/step - loss: 0.9318 - acc: 0.0783 - val_loss: 0.8631 - val_acc: 0.1180
Epoch 2/10: 8000/8000 [==========] - 55s 7ms/step - loss: 0.6587 - acc: 0.1334 - val_loss: 0.7052 - val_acc: 0.1477
Epoch 3/10: 8000/8000 [==========] - 54s 7ms/step - loss: 0.5701 - acc: 0.1526 - val_loss: 0.6445 - val_acc: 0.1632

How can I make that change? I know that Keras has callbacks that can be invoked during training, but I am not familiar with how that works.

If you want to use an alternative, you could use tqdm (version >= 4.41.0):

from tqdm.keras import TqdmCallback
...
model.fit(..., verbose=0, callbacks=[TqdmCallback(verbose=2)])

This turns off keras' own progress output (verbose=0) and uses tqdm instead. For the callback, verbose=2 means separate progress bars for epochs and batches, 1 means clear the batch bars when done, and 0 means only show epoch bars (never show batch bars).
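To see the mechanism outside of Keras, here is a minimal standalone sketch (not part of the original answer) of how tqdm renders a single-line bar with a text prefix via its desc parameter; this is the same labeling TqdmCallback applies to each epoch's bar:

```python
import io

from tqdm import tqdm

# Render a bar into a StringIO buffer instead of the terminal so the
# result can be inspected; desc= prepends a label to the bar line.
out = io.StringIO()
for step in tqdm(range(50), desc='Epoch 1/10', file=out):
    pass  # stand-in for per-batch work

# The rendered bar line starts with the 'Epoch 1/10' prefix and ends
# with the step counter, e.g. 'Epoch 1/10: 100%|...| 50/50 [...]'.
```

In a real terminal, tqdm rewrites the same line in place with carriage returns, which is what keeps the bar on one line.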

Yes, you can use callbacks ( https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/Callback ). For example:

import tensorflow as tf

class PrintLogs(tf.keras.callbacks.Callback):
    def __init__(self, epochs):
        self.epochs = epochs

    def set_params(self, params):
        # Report 0 epochs to Keras so its built-in logger does not
        # print its own 'Epoch x/y' header line.
        params['epochs'] = 0

    def on_epoch_begin(self, epoch, logs=None):
        # Print the epoch prefix without a trailing newline, so the
        # verbose=2 per-epoch summary continues on the same line.
        print('Epoch %d/%d' % (epoch + 1, self.epochs), end='')


mnist = tf.keras.datasets.mnist

(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(512, activation=tf.nn.relu),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
epochs = 5
model.fit(x_train, y_train,
          epochs=epochs, 
          validation_split=0.2, 
          verbose=2,
          callbacks=[PrintLogs(epochs)])

Output:

Train on 48000 samples, validate on 12000 samples
Epoch 1/5 - 10s - loss: 0.0306 - acc: 0.9901 - val_loss: 0.0837 - val_acc: 0.9786
Epoch 2/5 - 9s - loss: 0.0269 - acc: 0.9910 - val_loss: 0.0839 - val_acc: 0.9788
Epoch 3/5 - 9s - loss: 0.0253 - acc: 0.9915 - val_loss: 0.0895 - val_acc: 0.9781
Epoch 4/5 - 9s - loss: 0.0201 - acc: 0.9930 - val_loss: 0.0871 - val_acc: 0.9792
Epoch 5/5 - 9s - loss: 0.0206 - acc: 0.9931 - val_loss: 0.0917 - val_acc: 0.9793
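
The merging of the two lines comes down to plain print mechanics. Here is a minimal sketch (no Keras required; the function name is just for illustration) of how the callback's end='' prefix and a subsequent print end up on one line:

```python
def train_with_one_line_epochs(epochs):
    """Mimic the callback: epoch prefix plus summary on one line."""
    for epoch in range(epochs):
        # end='' suppresses the newline, leaving the cursor on the line.
        print('Epoch %d/%d' % (epoch + 1, epochs), end='')
        # Stand-in for the ' - 10s - loss: ...' summary Keras appends
        # when verbose=2; its trailing newline finishes the line.
        print(' - loss: %.4f' % (1.0 / (epoch + 1)))

train_with_one_line_epochs(3)
```

Because the callback's on_epoch_begin runs before Keras writes the verbose=2 summary, the summary lands on the line the prefix left open.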
