How to print one log line every 10 epochs when training models with TensorFlow Keras?
When I fit the model:
model.fit(X, y, epochs=40, batch_size=32, validation_split=0.2, verbose=2)
it prints one log line for every epoch:
Epoch 1/100
0s - loss: 0.2506 - acc: 0.5750 - val_loss: 0.2501 - val_acc: 0.3750
Epoch 2/100
0s - loss: 0.2487 - acc: 0.6250 - val_loss: 0.2498 - val_acc: 0.6250
Epoch 3/100
0s - loss: 0.2495 - acc: 0.5750 - val_loss: 0.2496 - val_acc: 0.6250
.....
How can I print a log line only every 10 epochs, like this?
Epoch 10/100
0s - loss: 0.2506 - acc: 0.5750 - val_loss: 0.2501 - val_acc: 0.3750
Epoch 20/100
0s - loss: 0.2487 - acc: 0.6250 - val_loss: 0.2498 - val_acc: 0.6250
Epoch 30/100
0s - loss: 0.2495 - acc: 0.5750 - val_loss: 0.2496 - val_acc: 0.6250
.....
This callback will create a text log file and write the lines you want into it:

from contextlib import redirect_stdout
from tensorflow.keras.callbacks import Callback

log_path = "text_file_name.txt"  # it will be created automatically

class print_training_on_text_every_10_epochs_Callback(Callback):
    def __init__(self, logpath):
        super().__init__()
        self.logpath = logpath

    def on_epoch_end(self, epoch, logs=None):
        # only write a line every 10th epoch (epochs are 0-indexed)
        if epoch % 10 == 0:
            with open(self.logpath, 'a') as writefile:  # append to log_path
                with redirect_stdout(writefile):
                    print("Epoch: {:>3} | Loss: ".format(epoch)
                          + f"{logs['loss']:.4e}"
                          + " | Valid loss: " + f"{logs['val_loss']:.4e}")
                    writefile.write("\n")

my_callbacks = [
    print_training_on_text_every_10_epochs_Callback(logpath=log_path),
]
Then pass it to model.fit like this:
model.fit(training_dataset, epochs=60, validation_data=validation_dataset, callbacks=my_callbacks)
The text file is only updated after every 10 epochs have passed.
This is what I get in the text file:
Epoch: 0 | Loss: 5.3454e+00 | Valid loss: 4.2420e-01
Epoch: 10 | Loss: 3.1342e-02 | Valid loss: 3.4554e-02
Epoch: 20 | Loss: 1.6330e-02 | Valid loss: 2.2512e-02
Note that the first epoch is numbered 0, the second 1, and so on.
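If you just want the line printed to the console rather than a file, a minimal sketch is a small custom Callback combined with verbose=0 to silence Keras's own per-epoch output. The class name, the interval parameter n, and the line format below are illustrative choices, not part of the original answer:

```python
import tensorflow as tf

class EveryNEpochsLogger(tf.keras.callbacks.Callback):
    """Print one summary line every n epochs (epochs are 0-indexed)."""

    def __init__(self, n=10):
        super().__init__()
        self.n = n

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        if epoch % self.n == 0:
            # format every metric Keras passed in, e.g. loss and val_loss
            metrics = " - ".join(f"{k}: {v:.4f}" for k, v in logs.items())
            print(f"Epoch {epoch}: {metrics}")
```

Usage would then be something like model.fit(X, y, epochs=100, verbose=0, callbacks=[EveryNEpochsLogger(10)]).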