

Is there a way to stop training in the middle of an epoch with tensorflow?

Just wondering if there is a way to save the highest accuracy and lowest loss in the middle of an epoch and use that as the score to beat going into the next epoch. Normally my accuracy maxes out at 43.56%, but I've seen it go above 46% partway through an epoch. Is there a way I can stop the epoch at that point and use that as the score to beat moving forward?
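To make it concrete, here's a rough sketch of the kind of batch-level tracking I mean (the callback name and structure are just an illustration, not code I'm actually running):

import numpy as np
import tensorflow as tf

class BestBatchTracker(tf.keras.callbacks.Callback):
    """Record the lowest loss seen after any batch, even mid-epoch."""

    def __init__(self):
        super().__init__()
        self.best_loss = np.inf

    def on_train_batch_end(self, batch, logs=None):
        logs = logs or {}
        loss = logs.get("loss")
        if loss is not None and loss < self.best_loss:
            # This can fire in the middle of an epoch, which is the
            # score I'd like to keep as the one to beat going forward.
            self.best_loss = loss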

Here's the code that I'm running right now:

import pandas as pd
import numpy as np
import pickle
import random
from skopt import BayesSearchCV
from sklearn.neural_network import MLPRegressor
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM, Bidirectional, SimpleRNN, GRU
from tensorflow.keras.wrappers.scikit_learn import KerasRegressor
from tensorflow.keras import layers
import tensorflow_docs as tfdocs
import tensorflow as tf
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
import tensorflow_docs.modeling
from tensorflow import keras
import warnings
warnings.filterwarnings("ignore")
warnings.filterwarnings('ignore', category=DeprecationWarning)

# `avenues`, `clean_dataset`, `norm`, `build_model`, `param_grid`, `es_acc`,
# and `es_loss` are all defined earlier in my script.
train_df = avenues[["LFA's", "Spend"]].sample(frac=0.8, random_state=0)
test_df = avenues[["LFA's", "Spend"]].drop(train_df.index)
train_df = clean_dataset(train_df)
test_df = clean_dataset(test_df)
train_df = train_df.reset_index(drop=True)
test_df = test_df.reset_index(drop=True)
train_stats = train_df.describe()
train_stats = train_stats.pop("LFA's")
train_stats = train_stats.transpose()
train_labels = train_df.pop("LFA's").values
test_labels = test_df.pop("LFA's").values
normed_train_data = np.array(norm(train_df)).reshape((train_df.shape[0], 1, 1))
normed_test_data = np.array(norm(test_df)).reshape((test_df.shape[0], 1, 1))
model = KerasRegressor(build_fn=build_model, epochs=25,
                       batch_size=1, verbose=0)
gs = BayesSearchCV(model, param_grid, cv=3, n_iter=25, n_jobs=1,
                   optimizer_kwargs={'base_estimator': 'RF'},
                   fit_params={"callbacks": [es_acc, es_loss,
                                             tfdocs.modeling.EpochDots()]})
try:
    gs.fit(normed_train_data, train_labels)
except Exception as e:
    print(e)

Try using train_on_batch instead of fit. That way you control the loop inside your epoch and can stop after any batch you like (although from a machine-learning point of view I have doubts that this is a good idea, and you'll end up with a less generalized model in the end).
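For example, a minimal sketch of that approach (the model, data, and file name below are placeholders, not taken from the question's code):

import numpy as np
import tensorflow as tf

# Placeholder model and data standing in for build_model() and the
# normed arrays from the question.
model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(8, input_shape=(1, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(100, 1, 1).astype("float32")
y = np.random.rand(100, 1).astype("float32")

batch_size = 1
best_loss = np.inf

for epoch in range(25):
    for start in range(0, len(x), batch_size):
        loss = model.train_on_batch(x[start:start + batch_size],
                                    y[start:start + batch_size])
        if loss < best_loss:
            # Checkpoint the best weights even in the middle of an epoch.
            best_loss = loss
            model.save_weights("best_mid_epoch.h5")

Because you control the inner loop yourself, you can also break out of it early whenever the mid-epoch score is good enough, which is the control that fit alone doesn't give you.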
