
Keras custom optimizer: change parameters at the end of each batch

I want to write my own custom optimizer that changes the learning rate at the end of each batch in Keras. First, I built a custom callback:

from keras.callbacks import Callback
from keras.optimizers import SGD

class custom_callback(Callback):
    def __init__(self, lr):
        super(custom_callback, self).__init__()
        self.lr = lr

    def on_batch_end(self, batch, logs=None):
        # Recompile the model with a fresh SGD instance whose learning rate
        # is scaled by the batch index.
        sgd = SGD(lr=batch * self.lr)
        self.model.compile(optimizer=sgd, loss='categorical_crossentropy',
                           metrics=['accuracy'])

Then I copied the SGD optimizer code from the docs. Because I want to make sure the learning rate actually changes, I print the learning rate in the get_updates function:

def get_updates(self, loss, params):
    print(self.lr)  # only runs while the training graph is being built
    ...

But it prints the learning rate only once. I've found that get_updates is called only once, when the computation graph is built. But I still don't understand why it doesn't print anything even after I re-initialize the SGD instance. How can I change the optimizer's parameters at the end of each batch? Thanks in advance.
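
(Side note on the print: get_updates builds the symbolic update ops once, so a Python print inside it can only fire during graph construction. A minimal sketch of watching the learning rate on every batch instead, by reading the optimizer's lr variable from a callback, could look like the following; the callback name is illustrative and the model is assumed to already be compiled:)

from keras import backend as K
from keras.callbacks import Callback

class LrLogger(Callback):
    # Illustrative sketch: read the optimizer's current learning rate
    # after every batch and print it.
    def on_batch_end(self, batch, logs=None):
        current_lr = K.get_value(self.model.optimizer.lr)
        print('batch {}: lr = {}'.format(batch, current_lr))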

Looking at the source code for LearningRateScheduler, it seems a minimal way to achieve what you want is the following (I did not check how often get_updates is called, and I'm not even sure it should be executed on every batch; in any case, this callback definitely does adjust the learning rate):

from keras import backend as K
from keras.callbacks import Callback

class BatchLearningRateScheduler(Callback):
    def __init__(self, lr):
        super().__init__()
        self.lr = lr

    def on_batch_end(self, batch, logs=None):
        lr = batch * self.lr
        K.set_value(self.model.optimizer.lr, lr)
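
A possible way to wire this up (the model, data, and base learning rate below are placeholders, not part of the original answer):

# Hypothetical usage: `model`, `x_train` and `y_train` stand in for your own
# compiled model and training data.
scheduler = BatchLearningRateScheduler(lr=0.001)
model.fit(x_train, y_train, batch_size=32, epochs=5, callbacks=[scheduler])

Because K.set_value writes the new rate into the optimizer's existing lr variable, the model never has to be recompiled between batches.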
