"Too early" early stopping in Keras

I'm training a neural network with Keras, using early stopping. However, during training the network very early hits a point where the validation loss is unnaturally low, which flattens out after a while, like this: [plot: loss function over training epochs]

When using early stopping with patience = 50, the validation loss flattens out but never goes below the validation loss reached in the beginning.

I've trained the network multiple times with the same result, with both the rmsprop (with learning rates from 0.1 to 1e-4) and adam optimizers.

Does anyone know if there is a way to set a "burn-in period" (like in a Markov Chain Monte Carlo model) for the network, before monitoring the validation loss for choosing the best model?

Maybe I'm 2-3 years late, but I had the same issue, and I solved it by coding this callback:

import tensorflow as tf

class DelayedEarlyStopping(tf.keras.callbacks.EarlyStopping):
    """EarlyStopping that ignores the first `burn_in` epochs."""

    def __init__(self, burn_in, **kwargs):
        super().__init__(**kwargs)
        self.burn_in = burn_in

    def on_epoch_end(self, epoch, logs=None):
        if epoch >= self.burn_in:
            # Past the burn-in period: apply normal early-stopping logic.
            super().on_epoch_end(epoch, logs)
        else:
            # Still in the burn-in period: re-run the parent's on_train_begin
            # to reset the wait counter and the best value seen so far, so
            # nothing observed before burn_in can trigger stopping or be
            # restored as the "best" weights.
            super().on_train_begin(logs=None)

early_stopping_monitor = DelayedEarlyStopping(
    100,  # burn_in: ignore the first 100 epochs
    monitor='val_total_loss',
    min_delta=0,
    patience=20,
    verbose=0,
    mode='auto',
    baseline=40,
    restore_best_weights=True
)
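You then pass the callback to model.fit like any other Keras callback. A minimal usage sketch, assuming a compiled single-output model and placeholder data names (model, x_train, y_train, x_val, y_val), with monitor set to the standard 'val_loss':

# Minimal sketch; model and data variables are placeholders.
early_stopping_monitor = DelayedEarlyStopping(
    100,  # burn_in: don't let the first 100 epochs count
    monitor='val_loss',
    patience=20,
    restore_best_weights=True,
)

history = model.fit(
    x_train, y_train,
    validation_data=(x_val, y_val),
    epochs=500,
    callbacks=[early_stopping_monitor],
)

As a side note, newer Keras versions (TensorFlow 2.11 and later, if I remember correctly) add a start_from_epoch argument to tf.keras.callbacks.EarlyStopping itself, which gives you this burn-in behaviour without a custom subclass.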
