rasa: training is too slow
I have an RTX 3090 GPU and an i9 12th-gen processor. My training data is not very large either, and yet training takes too long. When the training phase begins, it says 24 cores are available but that it is limiting itself to a safe limit of only 8 cores, and that NUMEXPR_MAX_THREADS is not set.
Set NUMEXPR_MAX_THREADS in your terminal. You can do so by writing in your CLI:

export NUMEXPR_MAX_THREADS="24"

if you want to use all of them. This will only last until you close your terminal; you can make it permanent by adding it to your shell profile (.bash_profile, ~/.zshrc, ...). Regarding slow execution, that depends on your rasa config choices and the number of stories/rules.
Finally, to make TED train faster, you need to pass the param

use_gpu = True

in your config for TEDPolicy.
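As a sketch, the config.yml entry might look like the following; the `use_gpu` parameter is taken from the answer above, and the `epochs` value is purely illustrative:

```yaml
# config.yml (fragment) — policy settings for the dialogue model
policies:
  - name: TEDPolicy
    epochs: 100       # illustrative value
    use_gpu: True     # parameter named in the answer above
```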