
PyTorch TensorBoard write frequency

I'm trying to write my training and validation losses to TensorBoard using torch (torch.utils.tensorboard), and it looks like it only writes up to 1000 data points, no matter what the actual number of iterations is. For example, running the following code,

from torch.utils.tensorboard import SummaryWriter

writer1 = SummaryWriter('runs/1')
writer2 = SummaryWriter('runs/2')

# Log 2,000 scalar points to the first run
for i in range(2000):
    writer1.add_scalar('tag', 1, i)

# Log 20,000 scalar points to the second run
for i in range(20000):
    writer2.add_scalar('tag', 1, i)

writer1.close()
writer2.close()

both yield exactly 1000 points when I examine the downloaded CSV. Even on the TensorBoard dashboard, the first points start at steps 5 and 18 and increment such that the total number of steps is 1000, rather than 2,000 and 20,000.

I don't know if this is TensorBoard's default behaviour or if it's PyTorch's decision, but either way, is there a way to write every single step?

Actually, I found the answer here. The SummaryWriter is saving every step; it is TensorBoard that downsamples the data for display (using reservoir sampling, which is why the retained points start at arbitrary steps like 5 and 18). To load everything, TensorBoard has to be started with the flag --samples_per_plugin scalars=0. The value 0 tells TensorBoard to load all points, while 100, for example, would mean a total of 100 points per chart.
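
You can also confirm that the event files on disk really contain every point, and that the truncation only happens at display time, by reading a run directory back with TensorBoard's EventAccumulator and disabling its own size limits. A minimal sketch, assuming the run directory runs/2 and the tag 'tag' from the snippet above:

from tensorboard.backend.event_processing import event_accumulator

# size_guidance of 0 disables the EventAccumulator's own downsampling,
# so every stored scalar event is loaded from the event file.
ea = event_accumulator.EventAccumulator(
    'runs/2',
    size_guidance={event_accumulator.SCALARS: 0},
)
ea.Reload()

events = ea.Scalars('tag')
print(len(events))       # expected: 20000 -- all points are on disk
print(events[0].step)    # 0
print(events[-1].step)   # 19999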

To sum up, I started TensorBoard with the command tensorboard --logdir=logs --samples_per_plugin scalars=0
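
If you start TensorBoard from Python (for example, in a notebook) rather than from the shell, the same flag can be passed programmatically. A minimal sketch using tensorboard.program; the logdir value 'runs' here is an assumption, so substitute whatever directory your runs live in:

from tensorboard import program

tb = program.TensorBoard()
# argv[0] is a placeholder for the program name; the remaining
# entries mirror the command-line flags.
tb.configure(argv=[
    None,
    '--logdir', 'runs',
    '--samples_per_plugin', 'scalars=0',
])
url = tb.launch()  # starts TensorBoard in a background thread
print('TensorBoard listening on', url)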
