
use Adam optimizer TWICE in tensorflow

I am trying to use the Adam optimizer twice to minimize different tensors in my code. Using GradientDescentOptimizer twice works fine, but I get an error when I use the Adam optimizer twice. I already asked another question about this: tensorflow: Variable RNNLM/RNNLM/embedding/Adam_2/ does not exist, but that solution does not work here. I also looked at this page: https://github.com/tensorflow/tensorflow/issues/6220, but I still don't understand the problem.

Here is my code. I get the error message: ValueError: Variable NN/NN/W/Adam_2/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?

I then tried the solution from tensorflow: Variable RNNLM/RNNLM/embedding/Adam_2/ does not exist, but it doesn't work here either.

import tensorflow as tf

def main():
    optimizer = tf.train.GradientDescentOptimizer(0.005)
    # optimizer = tf.train.AdamOptimizer(0.005)

    with tf.variable_scope('NN') as scope:
        W = tf.get_variable(name='W', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        X = tf.get_variable(name='X', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        y_ = tf.get_variable(name='y_', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        y1 = W + X
        loss_1 = tf.reduce_mean(tf.abs(y_ - y1))


        # train_op1 = tf.train.GradientDescentOptimizer(0.005).minimize(loss_1)
        train_op1 = tf.train.AdamOptimizer(0.005).minimize(loss_1)
        # with tf.variable_scope('opt'):
        #     train_op1 = tf.train.AdamOptimizer(0.005).minimize(loss_1)

        ##############################################################################################
        scope.reuse_variables()

        W2 = tf.get_variable(name='W', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        X2 = tf.get_variable(name='X', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        b = tf.Variable(tf.random_normal(shape=[5, 1], dtype=tf.float32))
        y2 = W2 + X2 + b
        loss_2 = tf.reduce_mean(tf.abs(y_ - y2))

        # train_op2 = tf.train.GradientDescentOptimizer(0.005).minimize(loss_2)
        # This second minimize() call is the one that raises:
        # ValueError: Variable NN/NN/W/Adam_2/ does not exist ...
        train_op2 = tf.train.AdamOptimizer(0.005).minimize(loss_2)
        # with tf.variable_scope('opt'):
        #     train_op2 = tf.train.AdamOptimizer(0.005).minimize(loss_2)


if __name__ == '__main__':
    main()

If you absolutely have to do it in the same scope, make sure all variables are defined before the optimizers are created. I'd have to do some more research on why it works like this, but the optimizer's settings are locked into the graph at a lower level and are no longer dynamically accessible.

Minimal working example:

import tensorflow as tf

def main():
    optimizer = tf.train.GradientDescentOptimizer(0.005)
    # optimizer = tf.train.AdamOptimizer(0.005)

    with tf.variable_scope('NN') as scope:
        assert scope.reuse == False
        W2 = tf.get_variable(name='W', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        X2 = tf.get_variable(name='X', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        y2_ = tf.get_variable(name='y_', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        b = tf.get_variable(name='b', initializer=tf.random_normal(shape=[5, 1], dtype=tf.float32))
        y2 = W2 + X2 + b
        loss_2 = tf.reduce_mean(tf.abs(y2_ - y2))

        # train_op2 = tf.train.GradientDescentOptimizer(0.005).minimize(loss_2)
        train_op2 = tf.train.AdamOptimizer(0.005).minimize(loss_2)


        # with tf.variable_scope('opt'):
        #     train_op1 = tf.train.AdamOptimizer(0.005).minimize(loss_1)

    ##############################################################################################
    with tf.variable_scope('NN', reuse=True) as scope:


        W = tf.get_variable(name='W', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        X = tf.get_variable(name='X', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        y_ = tf.get_variable(name='y_', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        b = tf.get_variable(name='b', initializer=tf.random_normal(shape=[5, 1], dtype=tf.float32))

        y1 = W + X
        loss_1 = tf.reduce_mean(tf.abs(y_ - y1))


        # train_op1 = tf.train.GradientDescentOptimizer(0.005).minimize(loss_1)
        train_op1 = tf.train.AdamOptimizer(0.005).minimize(loss_1)
        # with tf.variable_scope('opt'):
        #     train_op2 = tf.train.AdamOptimizer(0.005).minimize(loss_2)


if __name__ == '__main__':
    main()

The simplest way to fix this problem is to put the second optimizer in a different scope. That way the naming does not cause any confusion.
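Below is a minimal sketch of that idea, assuming the same TF 1.x graph API as in the question; the scope names opt1 and opt2 and the short sanity-check training loop at the end are illustrative additions, not part of the original answer. Both losses are built inside one model scope, and each minimize() call is wrapped in its own fresh scope, so the slot variables created by the two Adam instances should neither collide nor inherit a reuse flag from the model scope:

import tensorflow as tf

def main():
    with tf.variable_scope('NN'):
        # All model variables are created once, up front, in one scope.
        W = tf.get_variable(name='W', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        X = tf.get_variable(name='X', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        y_ = tf.get_variable(name='y_', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        b = tf.get_variable(name='b', initializer=tf.random_normal(shape=[5, 1], dtype=tf.float32))

        y1 = W + X
        loss_1 = tf.reduce_mean(tf.abs(y_ - y1))

        y2 = W + X + b
        loss_2 = tf.reduce_mean(tf.abs(y_ - y2))

    # Each optimizer gets its own, non-reusing scope outside 'NN'
    # (scope names chosen here for illustration).
    with tf.variable_scope('opt1'):
        train_op1 = tf.train.AdamOptimizer(0.005).minimize(loss_1)
    with tf.variable_scope('opt2'):
        train_op2 = tf.train.AdamOptimizer(0.005).minimize(loss_2)

    # Quick sanity check: both train ops can be run in one session.
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(3):
            sess.run([train_op1, train_op2])

if __name__ == '__main__':
    main()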
