
use Adam optimizer TWICE in tensorflow

I am trying to use the Adam optimizer twice to minimize different tensors in my code. Using GradientDescentOptimizer twice works fine, but I get an error when using the Adam optimizer twice. I asked a related question at: tensorflow Variable RNNLM/RNNLM/embedding/Adam_2/ does not exist , but that solution doesn't work here. I also looked at https://github.com/tensorflow/tensorflow/issues/6220 , but I still don't understand.

Here is my code. I get the error message: ValueError: Variable NN/NN/W/Adam_2/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?

Then I tried the solution from tensorflow Variable RNNLM/RNNLM/embedding/Adam_2/ does not exist , but it doesn't work either.

import tensorflow as tf

def main():
    optimizer = tf.train.GradientDescentOptimizer(0.005)
    # optimizer = tf.train.AdamOptimizer(0.005)

    with tf.variable_scope('NN') as scope:
        W = tf.get_variable(name='W', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        X = tf.get_variable(name='X', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        y_ = tf.get_variable(name='y_', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        y1 = W + X
        loss_1 = tf.reduce_mean(tf.abs(y_ - y1))


        # train_op1 = tf.train.GradientDescentOptimizer(0.005).minimize(loss_1)
        train_op1 = tf.train.AdamOptimizer(0.005).minimize(loss_1)
        # with tf.variable_scope('opt'):
        #     train_op1 = tf.train.AdamOptimizer(0.005).minimize(loss_1)

        ##############################################################################################
        scope.reuse_variables()

        W2 = tf.get_variable(name='W', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        X2 = tf.get_variable(name='X', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        b = tf.Variable(tf.random_normal(shape=[5, 1], dtype=tf.float32))
        y2 = W2 + X2 + b
        loss_2 = tf.reduce_mean(tf.abs(y_ - y2))

        # train_op2 = tf.train.GradientDescentOptimizer(0.005).minimize(loss_2)
        train_op2 = tf.train.AdamOptimizer(0.005).minimize(loss_2)  # this second AdamOptimizer raises the ValueError above
        # with tf.variable_scope('opt'):
        #     train_op2 = tf.train.AdamOptimizer(0.005).minimize(loss_2)


if __name__ == '__main__':
    main()

If you absolutely have to do it in the same scope, make sure all variables are defined in time, i.e. before the scope is switched to reuse mode. I'd have to do some more research on exactly why it behaves this way, but the root cause is that AdamOptimizer keeps per-variable slot variables (its moment accumulators, named like W/Adam and W/Adam_1), which it creates with tf.get_variable(); GradientDescentOptimizer has no slot variables, which is why it can be used twice without any trouble.

Minimal working example:

import tensorflow as tf

def main():
    optimizer = tf.train.GradientDescentOptimizer(0.005)
    # optimizer = tf.train.AdamOptimizer(0.005)

    with tf.variable_scope('NN') as scope:
        assert scope.reuse == False
        W2 = tf.get_variable(name='W', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        X2 = tf.get_variable(name='X', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        y2_ = tf.get_variable(name='y_', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        b = tf.get_variable(name='b', initializer=tf.random_normal(shape=[5, 1], dtype=tf.float32))
        y2 = W2 + X2 + b
        loss_2 = tf.reduce_mean(tf.abs(y2_ - y2))

        # train_op2 = tf.train.GradientDescentOptimizer(0.005).minimize(loss_2)
        train_op2 = tf.train.AdamOptimizer(0.005).minimize(loss_2)


        # with tf.variable_scope('opt'):
        #     train_op1 = tf.train.AdamOptimizer(0.005).minimize(loss_1)

    ##############################################################################################
    with tf.variable_scope('NN', reuse=True) as scope:


        W = tf.get_variable(name='W', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        X = tf.get_variable(name='X', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        y_ = tf.get_variable(name='y_', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        b = tf.get_variable(name='b', initializer=tf.random_normal(shape=[5, 1], dtype=tf.float32))

        y1 = W + X
        loss_1 = tf.reduce_mean(tf.abs(y_ - y1))


        # train_op1 = tf.train.GradientDescentOptimizer(0.005).minimize(loss_1)
        train_op1 = tf.train.AdamOptimizer(0.005).minimize(loss_1)
        # with tf.variable_scope('opt'):
        #     train_op2 = tf.train.AdamOptimizer(0.005).minimize(loss_2)


if __name__ == '__main__':
    main()

The simplest way to fix this problem is to put the second optimizer in a different scope. This way the naming does not cause any confusion.
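For example, here is a minimal sketch of that idea (untested, and assuming TF 1.x graph mode; the scope names opt_1 and opt_2 are arbitrary names chosen for this illustration, not from the original answer). Both losses are built inside the 'NN' scope as before, but each minimize() call is made outside the reusing scope, in its own non-reusing scope, so the Adam slot variables can be created under fresh names:

import tensorflow as tf

def main():
    with tf.variable_scope('NN') as scope:
        # Define every shared variable while the scope is still non-reusing.
        W = tf.get_variable(name='W', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        X = tf.get_variable(name='X', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        y_ = tf.get_variable(name='y_', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        b = tf.get_variable(name='b', initializer=tf.random_normal(shape=[5, 1], dtype=tf.float32))

        y1 = W + X
        loss_1 = tf.reduce_mean(tf.abs(y_ - y1))

        # Reuse W and X for the second network.
        scope.reuse_variables()
        W2 = tf.get_variable(name='W', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        X2 = tf.get_variable(name='X', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        y2 = W2 + X2 + b
        loss_2 = tf.reduce_mean(tf.abs(y_ - y2))

    # Each optimizer lives in its own scope, outside the reusing 'NN' scope,
    # so its slot variables (e.g. .../W/Adam, .../W/Adam_1) are created fresh
    # instead of being looked up in a reuse-only scope.
    with tf.variable_scope('opt_1'):
        train_op1 = tf.train.AdamOptimizer(0.005).minimize(loss_1)
    with tf.variable_scope('opt_2'):
        train_op2 = tf.train.AdamOptimizer(0.005).minimize(loss_2)


if __name__ == '__main__':
    main()

The key point is that minimize() should not be called while the current variable scope has reuse enabled, because Adam's slot variables are new variables that still have to be created.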
