
How to control verbosity in TensorFlow 2.0

In TensorFlow 1.x I had great freedom in choosing how and when to print accuracy/loss scores during training. For example, if I wanted to print the training loss every 100 epochs, inside a tf.Session() I'd write:

if epoch % 100 == 0:
    print(str(epoch) + '. Training Loss: ' + str(loss))

After the release of TF 2.0 (alpha), it seems that the Keras API forces you to stick with its standard output. Is there a way to get that flexibility back?

If you don't use the Keras Model methods (.fit, .train_on_batch, ...) and instead write your own training loop using eager execution (optionally wrapping it in a tf.function to convert it to its graph representation), you can control the verbosity just as you did in 1.x:

import tensorflow as tf

# Assumes `model`, `optimizer`, `dataset`, and `compute_loss` are already defined
training_epochs = 10
step = 0
for epoch in range(training_epochs):
    print("starting epoch", epoch)
    for features, labels in dataset:
        with tf.GradientTape() as tape:
            # forward pass recorded on the tape
            loss = compute_loss(model(features), labels)
        gradients = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(gradients, model.trainable_variables))
        step += 1
        if step % 10 == 0:
            # measure other metrics if needed
            print("loss: ", loss)
    print("Epoch ", epoch, " finished.")
