
PyTorch TensorBoard write frequency

I'm trying to write my training and validation losses to TensorBoard using torch (torch.utils.tensorboard), and it looks like it only writes up to 1,000 data points, no matter what the actual number of iterations is. For example, running the following code,

from torch.utils.tensorboard import SummaryWriter

writer1 = SummaryWriter('runs/1')
writer2 = SummaryWriter('runs/2')

# Log one scalar per step: 2,000 steps to the first run, 20,000 to the second.
for i in range(2000):
    writer1.add_scalar('tag', 1, i)

for i in range(20000):
    writer2.add_scalar('tag', 1, i)

writer1.close()
writer2.close()

both yield exactly 1,000 points when the downloaded CSV is examined. Even on the TensorBoard dashboard, the first points start at steps 5 and 18 respectively, and the step size increases so that the total number of points is 1,000 rather than 2,000 and 20,000.

I don't know if this is TensorBoard's default behaviour or PyTorch's decision, but either way, is there a way to write every single step?

Actually I found the answer here. The SummaryWriter is saving every point, but to load everything, TensorBoard has to be started with the flag --samples_per_plugin scalars=0. A value of 0 tells TensorBoard to load all points, while 100, for example, would mean a total of 100 points.
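To double-check that every point really is on disk and that only the dashboard downsamples, here is a minimal sketch using TensorBoard's EventAccumulator to read back the runs/2 directory from the snippet above (a size_guidance of 0 mirrors the flag's meaning of "keep everything"):

from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Load every scalar event from the run directory (0 = no downsampling).
acc = EventAccumulator('runs/2', size_guidance={'scalars': 0})
acc.Reload()

# All 20,000 logged values are in the event file; the dashboard and its
# CSV download are what reduce them to roughly 1,000 points by default.
print(len(acc.Scalars('tag')))  # expected: 20000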

To sum up, I started TensorBoard with the command tensorboard --logdir=logs --samples_per_plugin scalars=0
