When creating a new TensorBoard logger in PyTorch Lightning, two things are logged by default: the current epoch and the hp_metric. I was able to disable the hp_metric logging by setting default_hp_metric=False,
but I can't find any way to disable the logging of the epoch. I've searched the lightning.py, trainer.py, and tensorboard.py files, which contain the code for the module, the trainer, and the TensorBoard logger, and couldn't find a logging call for the epoch anywhere.
This behavior occurs even with the barebones example from the PyTorch Lightning tutorial.
Is there a way to disable this logging of the epoch to prevent clutter in the TensorBoard interface?
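For reference, a minimal sketch of the setup described above, assuming the stock TensorBoardLogger (the save_dir name is just an example):

from pytorch_lightning import Trainer, loggers

# default_hp_metric=False suppresses the hp_metric entry,
# but "epoch" is still written to TensorBoard on every logging call
logger = loggers.TensorBoardLogger("lightning_logs", default_hp_metric=False)
trainer = Trainer(logger=logger)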
You can disable the automatic writing of the epoch variable by overriding the TensorBoard logger:
from pytorch_lightning import loggers
from pytorch_lightning.utilities import rank_zero_only

class TBLogger(loggers.TensorBoardLogger):
    @rank_zero_only
    def log_metrics(self, metrics, step):
        # drop the "epoch" key that Lightning adds before the metrics reach TensorBoard
        metrics.pop('epoch', None)
        return super().log_metrics(metrics, step)
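To use it, pass an instance of the subclass to the Trainer in place of the default logger; a minimal sketch (the save_dir name is illustrative):

from pytorch_lightning import Trainer

trainer = Trainer(logger=TBLogger("lightning_logs", default_hp_metric=False))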
PyTorch Lightning automatically adds an epoch vs. global_step entry to the metrics passed to each logger (you can see the description here). The relevant trainer code looks like this:

if step is None:
    # added metrics for convenience
    scalar_metrics.setdefault("epoch", self.trainer.current_epoch)
    step = self.trainer.global_step
# log actual metrics
self.trainer.logger.agg_and_log_metrics(scalar_metrics, step=step)

So you can remove the epoch variable from the metrics dictionary in log_metrics(metrics, step), which is called from agg_and_log_metrics(scalar_metrics, step=step), as shown in the code above. You can see the full, long version of the snippet here.