How to combine sequential operations with side effects in TensorFlow
I'm developing a GAN in TensorFlow. Currently the training schedule is
feed_dict = ...
sess.run(discriminator_train_op, feed_dict)
sess.run(generator_train_op, feed_dict)
sess.run(generator_train_op, feed_dict)
We train the generator twice each step because we find that this results in better stability.
Now I want to combine the operations so that I only need to feed the network once, since feeding is slow in TensorFlow. I tried
with tf.control_dependencies([discriminator_train_op]):
    train_op = tf.group(generator_train_op)
with tf.control_dependencies([train_op]):
    train_op = tf.group(generator_train_op)
Supposedly, control_dependencies specifies that one operation must happen after another. But the profiling timeline shows that certain gradient-descent operations in the generator run in parallel with those in the discriminator; in other words, the ordering is not enforced. In addition, by adding debug statements to the network, I found that the combined train_op trains the generator only once, not twice.
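Both symptoms can be reproduced in isolation. Below is a minimal sketch (written against the tf.compat.v1 API so it also runs under TF 2.x; the assign_add ops are toy stand-ins for the real train ops): wrapping a pre-existing op in tf.group only attaches the dependency to the new NoOp that tf.group creates, not to the op itself, and fetching the same op twice in one run() call executes it only once.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Toy stand-ins for the real train ops.
v = tf.compat.v1.get_variable("v", initializer=0.0, use_resource=False)
d_step = tf.compat.v1.assign_add(v, 1.0)    # stand-in for discriminator_train_op
g_step = tf.compat.v1.assign_add(v, 10.0)   # stand-in for generator_train_op

with tf.compat.v1.control_dependencies([d_step]):
    grouped = tf.group(g_step)

# Only the NoOp created by tf.group picks up the dependency on d_step;
# g_step itself is unchanged, so the scheduler may still run it in
# parallel with (or before) d_step.
print([op.name for op in grouped.control_inputs])   # includes d_step's op
print(g_step.op.control_inputs)                     # [] -- no ordering on g_step

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    # Fetching the same op twice in one run() executes it only once:
    sess.run([g_step, g_step])
    v_val = sess.run(v)
    print(v_val)   # 10.0, not 20.0
```

This matches the observations above: the dependency never reaches the original op, and duplicate fetches collapse into a single execution.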
Is there any way to move the control of this sequence of operations from Python into TensorFlow?
with tf.control_dependencies([discriminator_train_op]):
    train_op_g1 = tf.group(generator_train_op)
with tf.control_dependencies([train_op_g1]):
    train_op_g2 = tf.group(generator_train_op)

sess.run([discriminator_train_op, train_op_g1, train_op_g2], feed_dict)
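One approach that should enforce the order is to create the generator's update ops *inside* the control_dependencies blocks, since an op only picks up a control dependency at creation time; calling minimize() twice also yields two distinct ops, so the generator genuinely updates twice per step. A hedged sketch (tf.compat.v1 API; the linear toy losses and variable names are placeholders for the real GAN losses, chosen so each gradient step is a constant decrement):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Toy stand-ins: linear losses so each gradient step is a constant decrement.
d_var = tf.compat.v1.get_variable("d_var", initializer=1.0)
g_var = tf.compat.v1.get_variable("g_var", initializer=1.0)
d_loss = 3.0 * d_var   # gradient w.r.t. d_var is 3.0
g_loss = 2.0 * g_var   # gradient w.r.t. g_var is 2.0

d_opt = tf.compat.v1.train.GradientDescentOptimizer(0.1)
g_opt = tf.compat.v1.train.GradientDescentOptimizer(0.1)

d_train = d_opt.minimize(d_loss, var_list=[d_var])
with tf.compat.v1.control_dependencies([d_train]):
    # Created INSIDE the context, so this new op really waits for d_train.
    g_train_1 = g_opt.minimize(g_loss, var_list=[g_var])
    with tf.compat.v1.control_dependencies([g_train_1]):
        # A second, distinct minimize op: the generator updates twice.
        g_train_2 = g_opt.minimize(g_loss, var_list=[g_var])

train_op = g_train_2   # fetching this runs d_train, then g_train_1, then g_train_2

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    sess.run(train_op)
    d_val, g_val = sess.run([d_var, g_var])
    print(d_val, g_val)   # d_var stepped once (1.0 - 0.3), g_var twice (1.0 - 2 * 0.2)
```

With this structure a single sess.run(train_op, feed_dict) feeds the network once and performs the whole discriminator-then-generator-twice schedule inside the graph.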