Learning Rate Callback on Step Rather Than Epoch?
I have been trying to write a custom learning rate scheduler that updates every step rather than every epoch.
I have managed to implement the following learning rate scheduler, which updates every epoch, but I can't figure out how to update the learning rate on every step.
import numpy as np
import tensorflow as tf

LR_START = 0.00001
LR_MAX = 0.0001
LR_MIN = 0.00001
LR_RAMPUP_EPOCHS = 3
LR_SUSTAIN_EPOCHS = 0

# NUM_TRAINING_IMAGES, BATCH_SIZE and EPOCHS are defined elsewhere.
WARMUP_STEPS = LR_RAMPUP_EPOCHS * (NUM_TRAINING_IMAGES // BATCH_SIZE)
TOTAL_STEPS = EPOCHS * (NUM_TRAINING_IMAGES // BATCH_SIZE)

def lrfn_epoch(epoch):
    if epoch < LR_RAMPUP_EPOCHS:
        # Linear warmup from LR_START to LR_MAX
        lr = (LR_MAX - LR_START) / LR_RAMPUP_EPOCHS * epoch + LR_START
    elif epoch < LR_RAMPUP_EPOCHS + LR_SUSTAIN_EPOCHS:
        lr = LR_MAX
    else:
        # Cosine decay
        progress = (epoch - LR_RAMPUP_EPOCHS) / (EPOCHS - LR_RAMPUP_EPOCHS)
        lr = LR_MAX * (0.5 * (1.0 + tf.math.cos(np.pi * ((1.0 * progress) % 1.0))))
    # LearningRateScheduler expects the function to return the new rate.
    return lr

lr_callback = tf.keras.callbacks.LearningRateScheduler(lrfn_epoch, verbose=True)
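For reference, this epoch-based callback is attached through the callbacks argument of model.fit; a minimal usage sketch, assuming model, train_dataset, and EPOCHS are defined elsewhere:

    # Minimal sketch: attach the epoch-based scheduler to training.
    # `model` and `train_dataset` are assumed to exist in your own code.
    history = model.fit(
        train_dataset,
        epochs=EPOCHS,
        callbacks=[lr_callback],
    )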
This is the function I have written to update the learning rate on every step.
def lrfn_step(step):
    if step < WARMUP_STEPS:
        # Linear warmup from LR_START to LR_MAX
        lr = (LR_MAX - LR_START) / WARMUP_STEPS * step + LR_START
    else:
        # Cosine decay over the remaining steps
        progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
        lr = LR_MAX * (0.5 * (1.0 + tf.math.cos(np.pi * ((1.0 * progress) % 1.0))))
    return lr
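A quick way to sanity-check this schedule before wiring it into a callback (assuming the constants above are defined) is to evaluate it at a few milestone steps:

    # Print the learning rate at the start of training, mid-warmup,
    # end of warmup, and the final step.
    for step in [0, WARMUP_STEPS // 2, WARMUP_STEPS, TOTAL_STEPS - 1]:
        print(f"step {step:6d}: lr = {float(lrfn_step(step)):.8f}")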
I found the answer!
Here is the code for a callback that updates on each step rather than each epoch.
from tensorflow import keras

# STEPS_PER_EPOCH = NUM_TRAINING_IMAGES // BATCH_SIZE, as above.
class CustomCallback(keras.callbacks.Callback):
    def __init__(self, schedule):
        super(CustomCallback, self).__init__()
        self.schedule = schedule
        self.epoch = 0

    def on_train_batch_begin(self, batch, logs=None):
        # Global step = completed epochs * steps per epoch + batch index.
        actual_step = (self.epoch * STEPS_PER_EPOCH) + batch
        # Call schedule function to get the scheduled learning rate.
        scheduled_lr = self.schedule(actual_step)
        # Set the value back on the optimizer before this batch runs.
        tf.keras.backend.set_value(self.model.optimizer.lr, scheduled_lr)
        # Print once per epoch; float() handles both Python floats and
        # the tensors returned by the cosine branch of the schedule.
        if batch == 0:
            print("--Learning Rate: {:.6f} --".format(float(scheduled_lr)))

    def on_epoch_end(self, epoch, logs=None):
        self.epoch += 1
And here is how to use it when fitting the model.
history = model.fit( ..
    ..
    callbacks=[CustomCallback(lrfn_step)],
    ..
)
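As a closing note, if a callback is not a hard requirement, TF2 optimizers can also consume a tf.keras.optimizers.schedules.LearningRateSchedule directly, which is evaluated on every optimizer step. Below is a minimal sketch of the same warmup-plus-cosine shape; the WarmupCosine class name, its constructor arguments, and the choice of Adam are illustrative, not from the original post:

    import numpy as np
    import tensorflow as tf

    # Hypothetical wrapper (not from the original post): the same
    # warmup + cosine shape expressed as a LearningRateSchedule, which
    # the optimizer evaluates at every step without any callback.
    class WarmupCosine(tf.keras.optimizers.schedules.LearningRateSchedule):
        def __init__(self, lr_start, lr_max, warmup_steps, total_steps):
            self.lr_start = lr_start
            self.lr_max = lr_max
            self.warmup_steps = warmup_steps
            self.total_steps = total_steps

        def __call__(self, step):
            step = tf.cast(step, tf.float32)
            warmup = tf.cast(self.warmup_steps, tf.float32)
            total = tf.cast(self.total_steps, tf.float32)
            # Linear warmup, then cosine decay; selected with tf.where
            # because `step` is a tensor inside the training loop.
            warmup_lr = (self.lr_max - self.lr_start) / warmup * step + self.lr_start
            progress = (step - warmup) / (total - warmup)
            cosine_lr = self.lr_max * 0.5 * (1.0 + tf.cos(np.pi * progress))
            return tf.where(step < warmup, warmup_lr, cosine_lr)

    optimizer = tf.keras.optimizers.Adam(
        learning_rate=WarmupCosine(LR_START, LR_MAX, WARMUP_STEPS, TOTAL_STEPS)
    )

Passed this way, the schedule follows the optimizer's own iteration counter, so no manual epoch bookkeeping is needed.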