Update loss value each batch to get an averaged epoch loss
I want to create an operation similar to the one you can obtain with tf.metrics and its update_op value. When you execute this in TensorFlow:
acc, update_op = tf.metrics.accuracy(tf.argmax(probs, 1), labels, name="accuracy")
The update_op value is updated on each call.
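For context, the usual pattern is to run update_op once per batch and read acc at the end of the epoch. A minimal runnable sketch with made-up two-class batches (the probs/labels placeholders stand in for a real model), written against the tf.compat.v1 graph-mode API so it also runs under TF 2.x (on TF 1.x, a plain `import tensorflow as tf` works the same way):

```python
import numpy as np
import tensorflow.compat.v1 as tf  # graph-mode API; plain `tensorflow` on TF 1.x

tf.disable_eager_execution()

# Stand-ins for a real model's output probabilities and integer labels.
probs = tf.placeholder(tf.float32, [None, 2])
labels = tf.placeholder(tf.int64, [None])

# `acc` reads the running accuracy; `update_op` folds one batch into it.
acc, update_op = tf.metrics.accuracy(tf.argmax(probs, 1), labels, name="accuracy")

with tf.Session() as sess:
    # tf.metrics keeps its counters in *local* variables.
    sess.run(tf.local_variables_initializer())
    for p, y in [
        (np.array([[0.9, 0.1], [0.2, 0.8]]), np.array([0, 1])),  # 2/2 correct
        (np.array([[0.3, 0.7], [0.6, 0.4]]), np.array([0, 0])),  # 1/2 correct
    ]:
        sess.run(update_op, feed_dict={probs: p, labels: y})
    epoch_acc = sess.run(acc)
    print(epoch_acc)  # 0.75, averaged over both batches
```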
So I want to do the same with the loss. I have tried the following code:

update_loss = tf.Variable(0., name="loss")
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=model.logits, labels=labels))
update_loss.assign(update_loss + loss)
But whenever I run:

init_vars = [tf.local_variables_initializer(),
             tf.global_variables_initializer()]
with tf.Session() as sess:
    loss_val = sess.run(update_loss)
I get a value of 0. Any idea?
I must point out that the value of the tensor loss is not zero during the execution.
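For comparison, in graph mode `tf.Variable.assign` only builds an op; nothing accumulates unless that op (and the initializers) are actually passed to `sess.run`. A minimal sketch, using a constant as a stand-in for the real cross-entropy loss and the tf.compat.v1 API:

```python
import tensorflow.compat.v1 as tf  # graph-mode API; plain `tensorflow` on TF 1.x

tf.disable_eager_execution()

update_loss = tf.Variable(0., name="loss")
loss = tf.constant(2.5)  # stand-in for the real cross-entropy loss

# `assign` does not mutate anything here: it returns an op that only
# takes effect when it is explicitly passed to `sess.run`.
accumulate_op = update_loss.assign(update_loss + loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # the init ops must be run too
    sess.run(accumulate_op)  # batch 1
    sess.run(accumulate_op)  # batch 2
    total_loss = sess.run(update_loss)
    print(total_loss)  # 5.0 rather than 0
```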
Ok, I have discovered a plausible solution which can work, but it doesn't really resolve my doubt... It is based on this post (5.2 Tensorflow - Batch accuracy).
It consists of creating a function that takes the last obtained loss value and passes it through feed_dict to update a placeholder with the cumulative value:
session.run(tf_metric_update, feed_dict=feed_dict)
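Concretely, that placeholder-based approach can be sketched like this (variable and placeholder names are my own; the per-batch losses would come from earlier `sess.run` calls on the training graph):

```python
import tensorflow.compat.v1 as tf  # graph-mode API; plain `tensorflow` on TF 1.x

tf.disable_eager_execution()

# The placeholder carries each batch loss computed elsewhere; two
# non-trainable variables keep the running sum and the batch count.
batch_loss_ph = tf.placeholder(tf.float32, [])
total = tf.Variable(0., trainable=False)
count = tf.Variable(0., trainable=False)

update_total = total.assign_add(batch_loss_ph)
update_count = count.assign_add(1.)
mean_loss = total / tf.maximum(count, 1.)            # averaged epoch loss
reset_op = tf.variables_initializer([total, count])  # run between epochs

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for batch_loss in [2.0, 4.0, 3.0]:  # made-up per-batch loss values
        sess.run([update_total, update_count],
                 feed_dict={batch_loss_ph: batch_loss})
    epoch_loss = sess.run(mean_loss)
    print(epoch_loss)  # (2 + 4 + 3) / 3 = 3.0
```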