
Three loss functions in a TensorFlow GAN

I've been following a guide on O'Reilly, and when it comes to its training and loss functions I'm rather perplexed.

d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=Dx, labels=tf.ones_like(Dx)))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=Dg, labels=tf.zeros_like(Dg)))

g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=Dg, labels=tf.ones_like(Dg)))
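For context, Dx and Dg here are the discriminator's raw logits on real and on generated images, respectively (the sigmoid is applied inside the cross-entropy op). The setup the question assumes looks roughly like the sketch below, where generator, discriminator, z_dim, and the placeholder shapes are hypothetical stand-ins for the guide's actual code:

import tensorflow as tf

z_dim = 100
z = tf.placeholder(tf.float32, [None, z_dim])                 # noise input
real_images = tf.placeholder(tf.float32, [None, 28, 28, 1])   # batch of real images

Gz = generator(z)                    # images produced by the generator
Dx = discriminator(real_images)      # discriminator logits for real images
Dg = discriminator(Gz, reuse=True)   # discriminator logits for fakes, shared weights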

It feels like d_loss_real and d_loss_fake should be added into a single loss, and that sum is what should be minimized, like so:

d_loss = d_loss_real + d_loss_fake

d_trainer = tf.train.AdamOptimizer(0.0003).minimize(d_loss, var_list=d_vars)

However, the guide defines three training ops rather than two: one each for d_loss_real, d_loss_fake, and g_loss.

# Train the discriminator
d_trainer_fake = tf.train.AdamOptimizer(0.0003).minimize(d_loss_fake, var_list=d_vars)
d_trainer_real = tf.train.AdamOptimizer(0.0003).minimize(d_loss_real, var_list=d_vars)

# Train the generator
g_trainer = tf.train.AdamOptimizer(0.0001).minimize(g_loss, var_list=g_vars)

Am I right in feeling that this is wrong, or am I missing something?

You only need two optimizers in total: one for the generator and one for the discriminator network. The gradient of a sum is the sum of the gradients, so a single discriminator update on d_loss_real + d_loss_fake matches the textbook formulation; the guide's two separate discriminator optimizers also train, but each AdamOptimizer keeps its own moment estimates, so the two versions are not strictly equivalent:

discriminator_trainer = tf.train.AdamOptimizer(0.0003).minimize(d_loss, var_list=d_vars)
generator_trainer = tf.train.GradientDescentOptimizer(0.0003).minimize(g_loss, var_list=g_vars)

Use a gradient descent optimizer in your generator network for better results.
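For completeness, here is a minimal sketch of the two-optimizer version inside a training loop. It assumes the placeholders and losses from the question; next_batch, batch_size, and z_dim are hypothetical stand-ins for the guide's data pipeline, and it keeps the question's Adam optimizer for the generator (swap in tf.train.GradientDescentOptimizer if you follow the suggestion above):

import numpy as np
import tensorflow as tf

d_loss = d_loss_real + d_loss_fake   # single combined discriminator loss

d_trainer = tf.train.AdamOptimizer(0.0003).minimize(d_loss, var_list=d_vars)
g_trainer = tf.train.AdamOptimizer(0.0001).minimize(g_loss, var_list=g_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(10000):
        real_batch = next_batch(batch_size)                    # hypothetical data loader
        z_batch = np.random.normal(0, 1, [batch_size, z_dim])  # fresh noise each step
        # One discriminator step on the combined loss, then one generator step.
        sess.run(d_trainer, feed_dict={real_images: real_batch, z: z_batch})
        sess.run(g_trainer, feed_dict={z: z_batch})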
