
Update loss value each batch to get an averaged epoch loss

I want to create an operation similar to the one you get from tf.metrics and its update_op value. When you execute this in TensorFlow:

acc, update_op = tf.metrics.accuracy(labels, tf.argmax(probs, 1), name="accuracy")

The update_op tensor updates the metric's running value each time it is run.
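For reference, the usage pattern I mean looks roughly like this (a sketch assuming TF 1.x; the toy labels and predictions are made up):

import tensorflow as tf

labels = tf.placeholder(tf.int64, [None])
predictions = tf.placeholder(tf.int64, [None])
acc, update_op = tf.metrics.accuracy(labels, predictions, name="accuracy")

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())  # the metric's counters are local variables
    # Running update_op once per batch accumulates into the counters.
    sess.run(update_op, feed_dict={labels: [0, 1], predictions: [0, 0]})
    sess.run(update_op, feed_dict={labels: [1, 1], predictions: [1, 1]})
    print(sess.run(acc))  # 0.75, averaged over both batches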

So I want to do the same with the loss. I have tried the following code:

update_loss = tf.Variable(0., name="loss")
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=model.logits, labels=labels))
update_loss.assign(update_loss + loss)

But whenever I run:

init_vars = [tf.local_variables_initializer(),
             tf.global_variables_initializer()]

with tf.Session() as sess:
    sess.run(init_vars)
    loss_val = sess.run(update_loss)

I get a value of 0. Any ideas?

EDIT:

I should point out that the value of the loss tensor is not zero during execution.
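Presumably the problem above is that update_loss.assign(...) only builds an assign op in the graph; since that op is never fetched in sess.run, the variable keeps its initial value of 0. A minimal sketch of keeping a handle to the op and running it explicitly, building on the snippet above (loss_update_op is a name I'm making up, and feed_dict is assumed to supply the model's inputs):

loss_update_op = update_loss.assign(update_loss + loss)  # keep a handle to the assign op

with tf.Session() as sess:
    sess.run(init_vars)
    sess.run(loss_update_op, feed_dict=feed_dict)  # actually executes the assignment
    loss_val = sess.run(update_loss)               # now reflects the accumulated loss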

OK, I have discovered a plausible solution which works, but it doesn't really resolve my doubt... It is based on this post (5.2 Tensorflow - Batch accuracy).

It consists of taking the loss value obtained for the last batch and feeding it through feed_dict into a placeholder; an update op then adds it to a variable that holds the cumulative value:

session.run(tf_metric_update, feed_dict=feed_dict)
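Concretely, a minimal sketch of that workaround (the names cumulative_loss, loss_ph, update_loss_op, and reset_loss_op are mine, not from the linked post):

import tensorflow as tf

# Non-trainable accumulator plus a placeholder fed with each batch's loss value.
cumulative_loss = tf.Variable(0., trainable=False, name="cumulative_loss")
loss_ph = tf.placeholder(tf.float32, shape=(), name="loss_ph")
update_loss_op = tf.assign_add(cumulative_loss, loss_ph)
reset_loss_op = tf.assign(cumulative_loss, 0.)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch_losses = [0.9, 0.7, 0.5]  # stand-ins for values returned by sess.run(loss, ...)
    sess.run(reset_loss_op)  # reset at the start of each epoch
    for batch_loss in batch_losses:
        sess.run(update_loss_op, feed_dict={loss_ph: batch_loss})
    epoch_loss = sess.run(cumulative_loss) / len(batch_losses)  # averaged epoch loss
    print(epoch_loss)  # 0.7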
