
multiple gpus and AdamOptimizer in tensorflow

I'm practicing TensorFlow with multiple GPUs, averaging the gradients computed by each GPU. However, it doesn't work when my optimizer is AdamOptimizer; it always works when I use GradientDescent.

Here is the code:

G = tf.Graph()
with G.as_default(), tf.device('/cpu:0'):
    full_data_dims = [batch_size*num_gpus] + data_dims
    data = tf.placeholder(dtype=tf.float32, shape=full_data_dims, name='data')
    labels = tf.placeholder(dtype=tf.int32, shape=[batch_size*num_gpus], name='labels')

    split_data = tf.split(data, num_gpus, axis=0)
    split_labels = tf.split(labels, num_gpus, axis=0)

    optimizer = tf.train.AdamOptimizer(learning_rate)

    replica_grads = []
    for i in range(num_gpus):
        with tf.name_scope('tower_{}'.format(i)), tf.device('/gpu:{}'.format(i)):

            model = build_model(split_data[i], split_labels[i])
            loss = model['loss']
            grads = optimizer.compute_gradients(loss)
            replica_grads.append(grads)
            tf.get_variable_scope().reuse_variables()


    tf.get_variable_scope().reuse_variables()
    average_grad = average_gradients_layer(replica_grads)
    grad_step = optimizer.apply_gradients(average_grad)
    train_step = tf.group(grad_step)
    init = tf.global_variables_initializer()

# Part3
config_proto = tf.ConfigProto(allow_soft_placement=True)
sess = tf.Session(graph=G, config=config_proto)
sess.run(init)
tf.train.start_queue_runners(sess=sess)
with sess.as_default():
    for step in range(num_steps):
        data_batch, label_batch = batch_maker(X_ok, y_ok, X_ng, y_ng, batch_size*num_gpus)
        results = sess.run([train_step, loss], feed_dict={data : data_batch, labels : label_batch})
        if step % flag == 0:
            print('\n')
            print('step : %s loss : %s' % (step, results[1]))
        sys.stdout.write('\r'+str(step)+'/'+str(num_steps))
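
(For reference, the average_gradients_layer function called above is not shown in the question. A typical tower-gradient averaging helper looks roughly like the sketch below; it assumes every tower returns a dense, non-None gradient for every variable, and is not necessarily the asker's actual implementation.)

def average_gradients_layer(replica_grads):
    # replica_grads: one list of (gradient, variable) pairs per GPU tower
    averaged = []
    for grad_and_vars in zip(*replica_grads):
        # grad_and_vars pairs the same variable across all towers
        grads = tf.stack([g for g, _ in grad_and_vars], axis=0)
        averaged.append((tf.reduce_mean(grads, axis=0), grad_and_vars[0][1]))
    return averaged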

Here is my error message:

 32     tf.get_variable_scope().reuse_variables()
 33     average_grad = average_gradients_layer(replica_grads)
---> 34     grad_step = optimizer.apply_gradients(average_grad)
 35     train_step = tf.group(grad_step)
 36     init = tf.global_variables_initializer()

Variable conv1_1/weight/Adam/ does not exist, or was not created with 
tf.get_variable(). Did you mean to set reuse=None in VarScope?

It seems AdamOptimizer looks for additional variables with '/Adam/' appended to my variable names. Can anyone fix it?

I don't know whether there is a bug or not, but the question was "can anyone fix it". Yes.

Encapsulate the GPU loop (but not the apply_gradients code) in a "with tf.variable_scope" context manager, so that the scope stops being reused once the GPU loop is exited. AdamOptimizer only creates its slot variables (the conv1_1/weight/Adam variable in the error) when apply_gradients is called, and with reuse still switched on the variable scope refuses to create new variables.
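
A minimal sketch of that change, applied to the graph-building part of the question's code (build_model and average_gradients_layer are the helpers from the question, unchanged):

    optimizer = tf.train.AdamOptimizer(learning_rate)

    replica_grads = []
    # Re-enter the current scope; reuse_variables() below only affects this
    # block and is forgotten once the "with" exits.
    with tf.variable_scope(tf.get_variable_scope()):
        for i in range(num_gpus):
            with tf.name_scope('tower_{}'.format(i)), tf.device('/gpu:{}'.format(i)):
                model = build_model(split_data[i], split_labels[i])
                loss = model['loss']
                grads = optimizer.compute_gradients(loss)
                replica_grads.append(grads)
                # share model weights between towers
                tf.get_variable_scope().reuse_variables()

    # Outside the with-block reuse is off again, so AdamOptimizer can create
    # its slot variables (e.g. conv1_1/weight/Adam) here.
    average_grad = average_gradients_layer(replica_grads)
    grad_step = optimizer.apply_gradients(average_grad)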
