
Will Trains automagically log Tensorboard HParams?

I know that it's possible to send hyper-params as a dictionary to Trains.
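For reference, a minimal sketch of what I mean by sending a dictionary (the project and task names are just placeholders):

```python
from trains import Task

task = Task.init(project_name='examples', task_name='hparams demo')

params = {'learning_rate': 1e-3, 'batch_size': 64, 'dropout': 0.25}
params = task.connect(params)  # values show up as the task's hyper-parameters in the web UI
```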

But can it also automagically log hyper-params that are logged using the TF2 HParams module?

Edit: This is done in the HParams tutorial using hp.hparams(hparams).
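For context, the relevant pattern from the TF2 HParams tutorial looks roughly like this (the hyper-parameter names, values, and log directory are illustrative):

```python
import tensorflow as tf
from tensorboard.plugins.hparams import api as hp

HP_NUM_UNITS = hp.HParam('num_units', hp.Discrete([16, 32]))
HP_DROPOUT = hp.HParam('dropout', hp.RealInterval(0.1, 0.2))

with tf.summary.create_file_writer('logs/run-0').as_default():
    hp.hparams({HP_NUM_UNITS: 32, HP_DROPOUT: 0.1})  # record the hparam values used in this run
    # ... train the model for this run ...
    tf.summary.scalar('accuracy', 0.91, step=1)      # the metric shown in the HParams dashboard
```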

Tensorboard HParams

Disclaimer: I'm part of the allegro.ai Trains team

From the screen-grab, it seems you have multiple runs with different hyper-parameters and a parallel-coordinates graph for display. This is the equivalent of running the same base experiment multiple times with different hyper-parameters and comparing the results in the Trains web UI, so far so good :)

Based on the HParam interface, one would have to use TensorFlow in order to sample from the HParams, usually within the code. How would you extend this approach to multiple experiments? It is not just about automagically logging the hparams; you would also need to create multiple experiments, one per parameter set.
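For illustration, one way to get one Trains experiment per parameter set is to clone an already-logged base experiment and enqueue each clone for a trains-agent worker. This is only a rough sketch; the project, task and queue names and the parameter key are placeholders:

```python
from trains import Task

# Sketch: clone a base experiment once per parameter set and enqueue the clones
base_task = Task.get_task(project_name='examples', task_name='hparams demo')

for lr in (1e-3, 1e-4, 1e-5):
    cloned = Task.clone(source_task=base_task, name='hparams demo lr={}'.format(lr))
    cloned.set_parameters({'learning_rate': lr})   # override the connected hyper-parameters
    Task.enqueue(cloned, queue_name='default')     # a trains-agent worker will execute it
```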

Wouldn't it make more sense to use an external optimizer to do the optimization? This way you can scale to multiple machines and use more sophisticated optimization strategies (such as Optuna); you can find a few examples in the trains examples/optimization folder.
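As a rough sketch of that approach, modelled on the trains optimization examples: a HyperParameterOptimizer clones the base task once per sampled parameter set and tracks the objective metric. The base task ID, metric names, parameter ranges and queue name below are placeholders, and the exact argument names may differ slightly between trains versions, so check against the examples in the repository:

```python
from trains import Task
from trains.automation import (
    DiscreteParameterRange, HyperParameterOptimizer, RandomSearch, UniformParameterRange,
)

# The optimizer itself is tracked as a controller experiment in the web UI
task = Task.init(project_name='examples', task_name='hparams optimization')

optimizer = HyperParameterOptimizer(
    base_task_id='<base-experiment-id>',      # the experiment to clone per parameter set
    hyper_parameters=[
        UniformParameterRange('dropout', min_value=0.0, max_value=0.5, step_size=0.05),
        DiscreteParameterRange('batch_size', values=[32, 64, 128]),
    ],
    objective_metric_title='validation',      # metric reported by the base experiment
    objective_metric_series='accuracy',
    objective_metric_sign='max',
    optimizer_class=RandomSearch,             # or an Optuna-based strategy, grid search, etc.
    max_number_of_concurrent_tasks=2,
    execution_queue='default',                # trains-agent workers pull cloned tasks from here
    total_max_jobs=10,
)

optimizer.start()
optimizer.wait()   # block until the search budget is exhausted
optimizer.stop()
```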
