
Will Trains automagically log Tensorboard HParams?

I know that it's possible to send hyper-params as a dictionary to Trains.

But can it also automagically log hyper-params that are logged using the TF2 HParams module?

Edit: This is done in the HParams tutorial using hp.hparams(hparams).
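
For reference, the pattern from the tutorial looks roughly like the sketch below (hyper-parameter names, values and the project/task names are just illustrative). The task.connect call is the explicit dictionary route I already know about; the question is whether the hp.hparams call itself would be picked up automatically:

```python
import tensorflow as tf
from tensorboard.plugins.hparams import api as hp
from trains import Task

# Illustrative hyper-parameter definitions, following the TF2 HParams tutorial
HP_NUM_UNITS = hp.HParam('num_units', hp.Discrete([16, 32]))
HP_DROPOUT = hp.HParam('dropout', hp.RealInterval(0.1, 0.2))

task = Task.init(project_name='examples', task_name='hparams demo')

# Values for one trial
hparams = {HP_NUM_UNITS: 32, HP_DROPOUT: 0.1}

# Explicit route: send the same values to Trains as a plain dictionary
task.connect({h.name: v for h, v in hparams.items()})

# Tutorial route: log the values with the TF2 HParams plugin
with tf.summary.create_file_writer('logs/hparam_tuning/run-0').as_default():
    hp.hparams(hparams)  # records the hyper-parameter values for this trial
    tf.summary.scalar('accuracy', 0.9, step=1)  # placeholder metric
```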

[screenshot: Tensorboard HParams dashboard]

Disclaimer: I'm part of the allegro.ai Trains team

From the screen-grab, it seems like you have multiple runs with different hyper-parameters, displayed in a parallel coordinates graph. This is the equivalent of running the same base experiment multiple times with different hyper-parameters and comparing the results in the Trains web UI, so far so good :)

Based on the HParam interface, one would have to use TensorFlow in order to sample from HP, usually from within the code. How would you extend this approach to multiple experiments? (It's not just automagically logging the hparams; you need to create multiple experiments, one per parameter set.)
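
To make the "one experiment per parameter set" point concrete, here is a rough sketch of the manual route: clone a template experiment that connects its hyper-parameters, override them per combination, and enqueue each clone for a trains-agent to execute. The task ID, queue name and parameter names are placeholders, and exact signatures may vary between Trains versions:

```python
from itertools import product
from trains import Task

BASE_TASK_ID = '<template_task_id>'  # placeholder: an experiment that calls task.connect(...)

grid = {'num_units': [16, 32], 'dropout': [0.1, 0.2]}

base = Task.get_task(task_id=BASE_TASK_ID)
for num_units, dropout in product(grid['num_units'], grid['dropout']):
    cloned = Task.clone(
        source_task=base,
        name='hparams num_units={} dropout={}'.format(num_units, dropout))
    # Override the connected hyper-parameters of the clone
    cloned.set_parameters({'num_units': num_units, 'dropout': dropout})
    # Send the clone to an execution queue served by a trains-agent
    Task.enqueue(task=cloned, queue_name='default')
```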

Wouldn't it make more sense to use an external optimizer to do the optimization? This way you can scale to multiple machines and use more sophisticated optimization strategies (like Optuna); you can find a few examples in the trains examples/optimization folder.
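
As a rough sketch of what those examples do (base task ID, metric names and queue name are placeholders, and argument names may differ slightly between Trains versions): the optimizer clones the template experiment per trial, overrides its hyper-parameters and tracks the reported objective metric. RandomSearch can be swapped for a more sophisticated optimizer, such as the Optuna-based one, when it is installed:

```python
from trains import Task
from trains.automation import (
    DiscreteParameterRange, HyperParameterOptimizer, RandomSearch, UniformParameterRange)

# The optimizer itself runs as its own (controller) experiment
task = Task.init(project_name='examples', task_name='HParams optimization')

optimizer = HyperParameterOptimizer(
    base_task_id='<template_task_id>',      # placeholder: the experiment to clone per trial
    hyper_parameters=[
        DiscreteParameterRange('num_units', values=[16, 32]),
        UniformParameterRange('dropout', min_value=0.1, max_value=0.2),
    ],
    # maximize the scalar reported under title 'accuracy', series 'validation'
    objective_metric_title='accuracy',
    objective_metric_series='validation',
    objective_metric_sign='max',
    optimizer_class=RandomSearch,
    execution_queue='default',              # queue served by one or more trains-agent machines
    max_number_of_concurrent_tasks=2,
    total_max_jobs=10,
)

optimizer.start()
optimizer.wait()   # block until the optimization budget is exhausted
optimizer.stop()
```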
