How to change the learning rate based on the previous epoch's accuracy in Keras, using an SGD optimizer?
(Print the learning rate every epoch with SGD.)
I am trying to print the learning rate during mini-batch gradient descent, but lr stays the same across many epochs (always 0.10000000149), even though it is supposed to change every mini-batch. The code is as follows:
```python
# set the decay to 1e-1 to see the lr change between epochs
sgd = SGD(lr=0.1, decay=1e-1, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])

class LossHistory(Callback):
    def on_epoch_begin(self, epoch, logs={}):
        lr = self.model.optimizer.lr.get_value()
        print('lr:', lr)

history = LossHistory()
model.fit(X_train, Y_train,
          batch_size=batch_size,
          nb_epoch=nb_epoch,
          callbacks=[history])
```
What you are printing is the initial learning rate, not the effective learning rate that is actually computed each iteration:

```python
lr = self.lr * (1. / (1. + self.decay * self.iterations))
```
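To see why the printed value never changes, you can evaluate the decay formula by hand. This standalone sketch (plain Python, no Keras needed) shows how the effective rate shrinks as `iterations` grows, using the question's settings (`lr=0.1`, `decay=1e-1`):

```python
# Time-based decay used by Keras' legacy SGD optimizer:
#   effective_lr = lr / (1 + decay * iterations)
# where `iterations` counts gradient updates (one per mini-batch).
def decayed_lr(lr, decay, iterations):
    return lr * (1.0 / (1.0 + decay * iterations))

lr, decay = 0.1, 1e-1
for it in [0, 10, 100, 1000]:
    print(it, decayed_lr(lr, decay, it))
# At iteration 0 the rate is still 0.1; after 10 updates it is 0.05,
# and it keeps shrinking from there.
```

The stored `optimizer.lr` attribute never changes; only this derived quantity does, which is why the original callback always prints 0.1.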
```python
from keras import backend as K
from keras.callbacks import Callback

class SGDLearningRateTracker(Callback):
    def on_epoch_end(self, epoch, logs={}):
        optimizer = self.model.optimizer
        lr = K.eval(optimizer.lr * (1. / (1. + optimizer.decay * optimizer.iterations)))
        print('\nLR: {:.6f}\n'.format(lr))
```
Then add the callback when fitting the model:
```python
model.fit(X_train, Y_train_cat, nb_epoch=params['n_epochs'], batch_size=params['batch_size'],
          validation_split=0.1, callbacks=[SGDLearningRateTracker()])
```
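The question's title also asks how to change the learning rate based on the previous epoch's accuracy. Keras provides a `ReduceLROnPlateau` callback for exactly that; the pure-Python sketch below (a hypothetical `PlateauScheduler` class, no Keras dependency, written for illustration only) shows the underlying idea: multiply the rate by `factor` whenever the monitored metric fails to improve for `patience` consecutive epochs.

```python
class PlateauScheduler:
    """Reduce the learning rate when a monitored metric stops improving.

    Illustrates the idea behind keras.callbacks.ReduceLROnPlateau;
    this is a standalone sketch, not the Keras implementation.
    """
    def __init__(self, lr, factor=0.5, patience=2):
        self.lr = lr
        self.factor = factor          # multiplier applied on plateau
        self.patience = patience      # epochs to wait before reducing
        self.best = float('-inf')     # best accuracy seen so far
        self.wait = 0                 # epochs since last improvement

    def on_epoch_end(self, accuracy):
        if accuracy > self.best:
            self.best = accuracy
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.lr *= self.factor
                self.wait = 0
        return self.lr

# Accuracy stalls at 0.70 for two epochs, so the rate halves once.
sched = PlateauScheduler(lr=0.1, factor=0.5, patience=2)
for acc in [0.60, 0.70, 0.70, 0.70, 0.75]:
    print(sched.on_epoch_end(acc))
# prints 0.1, 0.1, 0.1, 0.05, 0.05
```

In real code you would pass `ReduceLROnPlateau(monitor='val_acc', factor=0.5, patience=2)` to `callbacks=` in `model.fit` instead of rolling your own.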