
How should Trains be used with hyper-param optimization tools like RayTune?

What could be a reasonable setup for this? Can I call Task.init() multiple times in the same execution?

Disclaimer: I'm part of the allegro.ai Trains team.

One solution is to inherit from trains.automation.optimization.SearchStrategy and extend the functionality. This is similar to the Optuna integration, where Optuna is used for the Bayesian optimization and Trains does the hyper-parameter setting, launching experiments, and retrieving performance metrics.
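
As a rough illustration of that division of labor, here is a minimal sketch (not the actual SearchStrategy integration): Optuna proposes the hyper-parameters and Trains records each trial as its own experiment. The search space, the train_and_evaluate() stub, and the metric name are placeholders, and it assumes Task.init() can be called again after task.close(), the same open/close pattern used in the script further below.

import optuna
from trains import Task

def train_and_evaluate(lr, batch_size):
    # placeholder for the real training/validation code
    return 1.0 - lr

def objective(trial):
    # Optuna suggests the hyper-parameters for this trial
    hparam = {
        'lr': trial.suggest_float('lr', 1e-5, 1e-1, log=True),
        'batch_size': trial.suggest_categorical('batch_size', [32, 64, 128]),
    }
    # Trains registers them on a fresh experiment
    task = Task.init('hp optimization', 'optuna trial', reuse_last_task_id=False)
    task.connect(hparam)
    val_accuracy = train_and_evaluate(**hparam)
    # log the objective so it is also visible in the Trains UI
    task.get_logger().report_scalar('validation', 'accuracy', value=val_accuracy, iteration=0)
    task.close()
    return val_accuracy

study = optuna.create_study(direction='maximize')
study.optimize(objective, n_trials=20)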

Another option (not scalable, but probably easier to start with) is to have Ray Tune run your code (setting up the environment / git repo / docker etc. is up to the user), and have your training code look something like:

from trains import Task

# create a new experiment for this run
task = Task.init('hp optimization', 'ray-tuner experiment', reuse_last_task_id=False)
# register the hyper-parameters (assuming hparam is a dict)
task.connect(hparam)
# training loop here
# ...
# shutdown the experiment
task.close()

This means that every time Ray Tune executes the script, a new experiment is created with a new set of hyper-parameters (assuming hparam is a dictionary, it will be registered on the experiment as hyper-parameters).
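
For completeness, a minimal sketch of what the Ray Tune side could look like, using the classic tune.run() / tune.report() API. The train_model() trainable, the dummy score, and the search space are placeholder assumptions, not part of the original answer.

from ray import tune
from trains import Task

def train_model(config):
    # each trial runs separately, so a new Trains experiment is created per trial
    task = Task.init('hp optimization', 'ray-tuner experiment', reuse_last_task_id=False)
    # config is the hyper-parameter dict sampled by Ray Tune
    task.connect(config)
    # placeholder training loop; report a dummy score back to Ray Tune
    score = 1.0 - config['lr']
    tune.report(score=score)
    task.close()

tune.run(
    train_model,
    config={
        'lr': tune.loguniform(1e-5, 1e-1),
        'batch_size': tune.choice([32, 64, 128]),
    },
    num_samples=10,
)

Ray Tune then drives the sampling, while every trial still shows up as a separate experiment in the Trains UI.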
