
Saving best model in keras

I use the following code when training a model in Keras:

from keras.callbacks import EarlyStopping
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(100, activation='relu', input_shape = input_shape))
model.add(Dense(1))

model_2.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])


model.fit(X, y, epochs=15, validation_split=0.4, callbacks=[early_stopping_monitor], verbose=False)

model.predict(X_test)

but recently I wanted to save the best trained model, because the data I am training on produces a lot of spikes in the val_loss vs. epochs graph, and I want to use the best model obtained so far.

Is there any method or function to help with that?

EarlyStopping and ModelCheckpoint from the Keras documentation are what you need.

You should set save_best_only=True in ModelCheckpoint. Any other adjustments needed are trivial.

To help you further, you can see an example usage on Kaggle.


Adding the code here in case the above Kaggle example link is not available:

from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau

model = getModel()
model.summary()

batch_size = 32

# stop training when val_loss has not improved for 10 epochs
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
# keep overwriting .mdl_wts.hdf5 with the weights of the best (lowest val_loss) epoch
mcp_save = ModelCheckpoint('.mdl_wts.hdf5', save_best_only=True, monitor='val_loss', mode='min')
# reduce the learning rate when val_loss plateaus (epsilon was renamed min_delta in newer Keras)
reduce_lr_loss = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=7, verbose=1, epsilon=1e-4, mode='min')

model.fit(Xtr_more, Ytr_more, batch_size=batch_size, epochs=50, verbose=0, callbacks=[earlyStopping, mcp_save, reduce_lr_loss], validation_split=0.25)
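
After fit() returns, .mdl_wts.hdf5 holds the weights of the epoch with the lowest val_loss. A minimal sketch of loading them back before predicting (Xtest here is a hypothetical stand-in for your test data):

# load the best weights written by ModelCheckpoint, then predict with them
model.load_weights('.mdl_wts.hdf5')
predictions = model.predict(Xtest)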

EarlyStopping's restore_best_weights argument will do the trick:

restore_best_weights: whether to restore model weights from the epoch with the best value of the monitored quantity. If False, the model weights obtained at the last step of training are used.

So I'm not sure how your early_stopping_monitor is defined, but going with all the default settings, and seeing that you already imported EarlyStopping, you could do this:

early_stopping_monitor = EarlyStopping(
    monitor='val_loss',
    min_delta=0,
    patience=0,
    verbose=0,
    mode='auto',
    baseline=None,
    restore_best_weights=True
)

And then just call model.fit() with callbacks=[early_stopping_monitor] like you already do.
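
For completeness, a minimal sketch of the question's fit and predict calls with that callback attached (X, y, X_test as in the question):

model.fit(X, y, epochs=15, validation_split=0.4, callbacks=[early_stopping_monitor], verbose=False)

# the weights from the best epoch (lowest val_loss) are restored before fit() returns
model.predict(X_test)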

I guess model_2.compile was a typo. This should help if you want to save the best model with respect to val_loss:

from keras.callbacks import ModelCheckpoint

# {acc}/{val_acc} placeholders require those metric keys in the training logs (newer Keras names them 'accuracy'/'val_accuracy')
checkpoint = ModelCheckpoint('model-{epoch:03d}-{acc:03f}-{val_acc:03f}.h5', verbose=1, monitor='val_loss', save_best_only=True, mode='auto')

model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])

model.fit(X, y, epochs=15, validation_split=0.4, callbacks=[checkpoint], verbose=False)
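
Because save_best_only=True only overwrites the file when val_loss improves, the last .h5 written is the best full model and can be reloaded later. A sketch of that step; the file name below is illustrative only, since the real one depends on the epoch and accuracy values formatted into it:

from keras.models import load_model

# use the file name ModelCheckpoint actually wrote; this one is made up for illustration
best_model = load_model('model-012-0.912345-0.887654.h5')
best_model.predict(X_test)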
