How to write to TensorBoard in TensorFlow 2
I'm quite familiar with TensorFlow 1.x, and I'm considering switching to TensorFlow 2 for an upcoming project. I'm having some trouble understanding how to write scalars to TensorBoard logs with eager execution, using a custom training loop.
In tf1 you would create some summary ops (one op for each thing you wanted to store), merge them into a single op, run that merged op inside a session, and then write the result to a file using a FileWriter object. Assuming sess is our tf.Session(), an example of how this worked can be seen below:
# While defining our computation graph, define summary ops:
# ... some ops ...
tf.summary.scalar('scalar_1', scalar_1)
# ... some more ops ...
tf.summary.scalar('scalar_2', scalar_2)
# ... etc.
# Merge all these summaries into a single op:
merged = tf.summary.merge_all()
# Define a FileWriter (i.e. an object that writes summaries to files):
writer = tf.summary.FileWriter(log_dir, sess.graph)
# Inside the training loop run the op and write the results to a file:
for i in range(num_iters):
    summary, ... = sess.run([merged, ...], ...)
    writer.add_summary(summary, i)
The problem is that sessions don't exist anymore in tf2, and I would prefer not to disable eager execution to make this work. The official documentation is written for tf1, and all the references I can find suggest using the TensorBoard Keras callback. However, as far as I know, this only works if you train the model through model.fit(...) and not through a custom training loop.
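For reference, the callback route those references describe looks roughly like this (a minimal sketch; log_dir, model, and train_set are placeholders for a log directory, an already-compiled Keras model, and a dataset, none of which are shown here):

import tensorflow as tf

# Keras route: attach the TensorBoard callback to model.fit().
# `log_dir`, `model` and `train_set` are placeholders, not defined above.
tb_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir)
model.fit(train_set, epochs=5, callbacks=[tb_callback])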
Things I have tried:

- The tf1 version of the tf.summary functions, outside of a session. Obviously, any combination of these functions fails, as FileWriters, merge ops, etc. don't even exist in tf2.
- This Medium post states that there has been a "cleanup" in some TensorFlow APIs, including tf.summary(). They suggest using from tensorflow.python.ops.summary_ops_v2, which doesn't seem to work. This implies using record_summaries_every_n_global_steps; more on this later.
- A series of other posts (1, 2, 3) suggest using tf.contrib.summary and tf.contrib.FileWriter. However, tf.contrib has been removed from the core TensorFlow repository and build process.
- A TensorFlow v2 showcase from the official repo, which again uses the tf.contrib summaries along with the record_summaries_every_n_global_steps mentioned previously. I couldn't make this work either (even without using the contrib library).

My questions are:
1. Is there a way to properly use tf.summary in TensorFlow 2?
2. If not, is there another way to write TensorBoard logs in TensorFlow 2 when using a custom training loop (not model.fit())?

Yes, there is a simpler and more elegant way to use summaries in TensorFlow v2.
First, create a file writer that stores the logs (e.g. in a directory named log_dir):
writer = tf.summary.create_file_writer(log_dir)
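A common convention (not required by the API) is to give each run its own timestamped subdirectory, so that separate runs show up as separate curves in TensorBoard; a sketch using the standard datetime module:

import datetime

# one subdirectory per run, e.g. logs/20190605-120000 (illustrative layout)
log_dir = "logs/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
writer = tf.summary.create_file_writer(log_dir)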
Anywhere you want to write something to the log file (e.g. a scalar), use your good old tf.summary.scalar inside a context created by the writer. Suppose you want to store the value of scalar_1 for step i:
with writer.as_default():
    tf.summary.scalar('scalar_1', scalar_1, step=i)
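The same pattern covers the other summary ops in tf2; for example, assuming w is a weight tensor and img a rank-4 batch of images (both hypothetical names, not defined above):

with writer.as_default():
    # any tf.summary op works inside the context, e.g.:
    tf.summary.histogram('weights', w, step=i)   # w: any tensor of values
    tf.summary.image('inputs', img, step=i)      # img: [k, h, w, c] image batch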
You can open as many of these contexts as you like inside or outside of your training loop.
Example:
# create the file writer object
writer = tf.summary.create_file_writer(log_dir)
for i, (x, y) in enumerate(train_set):
    with tf.GradientTape() as tape:
        y_ = model(x)
        loss = loss_func(y, y_)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    # write the loss value
    with writer.as_default():
        tf.summary.scalar('training loss', loss, step=i+1)
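Since the step is passed explicitly to tf.summary.scalar, you can equally well enter the writer's context once, outside the loop, and flush manually if you want the pending events on disk before the context exits; a sketch of the same loop, abbreviated:

with writer.as_default():
    for i, (x, y) in enumerate(train_set):
        # ... forward pass and gradient step exactly as above ...
        tf.summary.scalar('training loss', loss, step=i+1)
    tf.summary.flush(writer)  # optional: force buffered summaries to disk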