
Advice on how to create a custom tf.keras optimizer (optimizer_v2)

I want to make an accumulated SGD optimizer for tf.keras (not standalone Keras). I have found a couple of implementations of accumulated SGD optimizers for standalone Keras, including this one on PyPI. Nevertheless, the project I am working with uses tf.keras, and from what I have seen it is not a good idea to mix the two.

The problem is that the documentation for writing such a custom optimizer is not really straightforward. The base class I should inherit from is OptimizerV2 (defined in optimizer_v2.py), which contains some information about the task in its comment section.

The required methods that should be overridden are:

- resource_apply_dense (update variable given gradient tensor is dense)
- resource_apply_sparse (update variable given gradient tensor is sparse)
- create_slots (if your optimizer algorithm requires additional variables)
- get_config (serialization of the optimizer, include all hyper parameters)

Of these, only get_config() actually exists under that name in the base class: resource_apply_dense is actually _resource_apply_dense, resource_apply_sparse is _resource_apply_sparse, and create_slots does not exist in the base class at all. In subclasses such as SGD in gradient_descent.py it exists as _create_slots.
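
From reading the base class and SGD, my current understanding is that a subclass skeleton would look roughly like the sketch below (the name MyOptimizer, the "accum" slot and the trivial SGD-style update rules are just placeholders of mine, not a working accumulated optimizer):

from tensorflow.python.keras.optimizer_v2 import optimizer_v2


class MyOptimizer(optimizer_v2.OptimizerV2):
    """Skeleton only -- the update rules below are plain SGD placeholders."""

    def __init__(self, learning_rate=0.01, name="MyOptimizer", **kwargs):
        super(MyOptimizer, self).__init__(name, **kwargs)
        self._set_hyper("learning_rate", learning_rate)

    def _create_slots(self, var_list):
        # one extra per-weight variable, e.g. a gradient accumulator
        for var in var_list:
            self.add_slot(var, "accum")

    def _resource_apply_dense(self, grad, var, apply_state=None):
        lr_t = self._decayed_lr(var.dtype.base_dtype)
        return var.assign_sub(lr_t * grad)

    def _resource_apply_sparse(self, grad, var, indices, apply_state=None):
        lr_t = self._decayed_lr(var.dtype.base_dtype)
        return self._resource_scatter_add(var, indices, -lr_t * grad)

    def get_config(self):
        config = super(MyOptimizer, self).get_config()
        config.update(
            {"learning_rate": self._serialize_hyperparameter("learning_rate")})
        return config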

Anyway, the documentation has apparently not been updated (there is also an issue on GitHub pointing out this inconsistency, though I no longer have the link), and this makes the whole procedure difficult. For example, in SGD I have to override the _resource_apply_dense() method, but I cannot understand where the gradients are calculated and where they are applied.

The actual code is given below:

def _resource_apply_dense(self, grad, var, apply_state=None):
    var_device, var_dtype = var.device, var.dtype.base_dtype
    coefficients = ((apply_state or {}).get((var_device, var_dtype))
                    or self._fallback_apply_state(var_device, var_dtype))

    if self._momentum:
        momentum_var = self.get_slot(var, "momentum")
        return training_ops.resource_apply_keras_momentum(
            var.handle,
            momentum_var.handle,
            coefficients["lr_t"],
            grad,
            coefficients["momentum"],
            use_locking=self._use_locking,
            use_nesterov=self.nesterov)
    else:
        return training_ops.resource_apply_gradient_descent(
            var.handle, coefficients["lr_t"], grad,
            use_locking=self._use_locking)

which obviously relies on training_ops.resource_apply_keras_momentum and training_ops.resource_apply_gradient_descent to do the actual job. How can I split out, from the code above, the two parts mentioned in the minimize() method of OptimizerV2, namely _compute_gradients() and apply_gradients()?
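
As far as I can tell, minimize() is a wrapper around those two steps, so what I would like to reproduce by hand is something along these lines (loss and var_list standing in for my own tensors and variables):

# what optimizer.minimize(loss, var_list) does, split into its two halves:
grads = optimizer.get_gradients(loss, var_list)   # 1) backprop / autodiff only
train_op = optimizer.apply_gradients(zip(grads, var_list))
# 2) apply_gradients creates the slots on first use and then dispatches to
#    _resource_apply_dense / _resource_apply_sparse once per variable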

There are a lot of confusing parts in these comments, for example in the base class:

Many optimizer subclasses, such as Adam and Adagrad allocate and manage additional variables associated with the variables to train. These are called Slots . Slots have names and you can ask the optimizer for the names of the slots that it uses.

although if I declare an Adam optimizer and ask for its slot names, I get an empty list (?):

optimizer = Adam(lr=1e-3)    
optimizer.get_slot_names()

[]

Another confusing issue is the use of private methods: it is not clear when they are called or what their purpose is. For example, _prepare_local() is defined in SGD and includes the line:

apply_state[(var_device, var_dtype)]["momentum"] = array_ops.identity(self._get_hyper("momentum", var_dtype))

Anyway, the problem is that I do not know exactly which approach to follow to create a custom tf.keras optimizer. The instructions in the comments seem to contradict the actual implemented subclasses, and the subclasses in turn delegate the dirty work to C++ functions, without it being clear how that is done or how (in my case) to separate the steps (such as computing the gradients and applying them). So, can anyone advise on how to proceed, and which steps to follow, to accomplish this (relatively) simple task?

I am using tf 1.15 by the way (so the links are from there).

Reference for an optimizer: DiffGrad (kind of Adam-like)
https://github.com/evanatyourservice/diffGrad-tf/blob/master/diffgrad.py
It is based on a paper called diffGrad; the authors give good explanations and it is generally a good read.

First of all, good question; secondly, the TensorFlow documentation could do a lot better. Answers to the various questions, in no particular order:

In reference to the empty slot list for Adam: as far as I have seen, you have to run model.fit once on a model for the slots to be initialized. I remember reading about this while looking up how to save and load optimizer states (check whether model.compile is already enough).
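
A quick way to check this, assuming eager execution (or TF 2.x): the slots show up as soon as apply_gradients has seen the variables, because that is when _create_slots runs.

import tensorflow as tf

opt = tf.keras.optimizers.Adam(learning_rate=1e-3)
print(opt.get_slot_names())    # [] -- the optimizer has not seen any variables yet

var = tf.Variable(1.0)
opt.apply_gradients([(tf.constant(0.5), var)])   # triggers _create_slots([var])
print(opt.get_slot_names())    # ['m', 'v'] -- Adam's first/second-moment slots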

As for _prepare_local, that line creates the momentum tensor from the hyperparameter you set on creation. I suppose this makes it accessible to all the weights the optimizer is trying to update; why they use identity is deep TensorFlow graph stuff.

Why they use _prepare_local in general is to create values that are common across all weights being updated, such as decays, learning rates or time steps. Within each iteration these values are shared across all variables tracked in the optimizer's var_list.
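
So in a custom optimizer you would typically extend it along these lines (a fragment of a hypothetical subclass; accum_steps is an invented hyperparameter that would be registered with _set_hyper in __init__):

def _prepare_local(self, var_device, var_dtype, apply_state):
    super(MyOptimizer, self)._prepare_local(var_device, var_dtype, apply_state)
    # cache per-device/per-dtype constants once per step so that
    # _resource_apply_dense can just read them from `apply_state`
    # (array_ops comes from tensorflow.python.ops, as in the SGD source)
    apply_state[(var_device, var_dtype)]["accum_steps"] = array_ops.identity(
        self._get_hyper("accum_steps", var_dtype))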

Unlike the shared state from _prepare_local above, slots are separate variables for each weight tracked by the optimizer, so you might have moments, a history, or a cumulative sum there: anything tied to that specific individual weight.
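
Concretely, inside _resource_apply_dense the two kinds of state are read differently; a fragment ("accum" being a hypothetical slot name created in _create_slots):

# per-variable state: a slot created once per weight in _create_slots
accum = self.get_slot(var, "accum")           # same shape as `var`

# per-step shared state: looked up from apply_state filled by _prepare_local
coefficients = ((apply_state or {}).get((var_device, var_dtype))
                or self._fallback_apply_state(var_device, var_dtype))
lr_t = coefficients["lr_t"]                   # scalar, shared by every weight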

Gradient compute and apply: if I understand this correctly, computing the gradients takes the loss, does backpropagation / automatic differentiation, and gets you the gradients for each weight. When you go to apply them is when the optimizer comes into play with its slots and variables; finally the optimizer does the updating, with the computed gradients as its inputs.
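
Tying this back to the accumulated-SGD goal: by the time _resource_apply_dense is called the gradient is already computed, so all the accumulation logic lives in how you apply it. A rough sketch only (not the DiffGrad code; it assumes an "accum" slot from _create_slots, an invented accum_steps hyperparameter, and import tensorflow as tf):

def _resource_apply_dense(self, grad, var, apply_state=None):
    var_dtype = var.dtype.base_dtype
    lr_t = self._decayed_lr(var_dtype)                 # current learning rate
    accum = self.get_slot(var, "accum")                # per-variable accumulator
    steps = self._get_hyper("accum_steps", tf.int64)   # invented hyperparameter

    # 1) always add this step's gradient to the accumulator
    accum_t = accum.assign_add(grad)

    # 2) on every accum_steps-th call, apply the averaged gradient and reset;
    #    self.iterations is incremented by apply_gradients after the update
    apply_now = tf.equal((self.iterations + 1) % steps, 0)

    def _apply_and_reset():
        update = var.assign_sub(lr_t * accum_t / tf.cast(steps, var_dtype))
        with tf.control_dependencies([update]):
            return accum.assign(tf.zeros_like(accum_t))

    return tf.cond(apply_now, _apply_and_reset, lambda: accum_t)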
