DistributedDataParallel with GPU device ID specified in PyTorch
I want to train my model through DistributedDataParallel on a single machine that has 8 GPUs. But I want to train my model on the four specified GPUs with device IDs 4, 5, 6, 7.
How do I specify the GPU device IDs for DistributedDataParallel?
I think the world size will be 4 for this case, but what should the rank be in this case?
You can set the environment variable CUDA_VISIBLE_DEVICES. Torch will read this variable and only use the GPUs specified in it. You can either do this directly in your Python code like this:
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '4,5,6,7'
Take care to execute this statement before you initialize torch in any way (in particular, before any CUDA call), otherwise it will not take effect. The other option is to set the environment variable temporarily when starting your script from the shell:
CUDA_VISIBLE_DEVICES=4,5,6,7 python your_script.py
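As for the rank question: once CUDA_VISIBLE_DEVICES='4,5,6,7' is set, torch renumbers the visible GPUs as cuda:0 through cuda:3, so you use world_size=4 and ranks 0..3 as usual — rank i simply ends up on physical GPU 4+i. A minimal sketch of a single-machine launch (the master address/port values and the toy Linear model are placeholders, not anything prescribed by PyTorch):

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank, world_size):
    # Rendezvous settings for a single machine (placeholder port).
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '29500'
    dist.init_process_group('nccl', rank=rank, world_size=world_size)
    # rank is a *local* index into the visible devices:
    # rank 0 -> physical GPU 4, rank 1 -> GPU 5, and so on.
    torch.cuda.set_device(rank)
    model = DDP(torch.nn.Linear(10, 10).cuda(rank), device_ids=[rank])
    # ... training loop ...
    dist.destroy_process_group()

if __name__ == '__main__':
    # Must be set before torch touches CUDA in any way.
    os.environ['CUDA_VISIBLE_DEVICES'] = '4,5,6,7'
    world_size = 4
    # Guard: only spawn if the four requested GPUs are actually visible.
    if torch.cuda.device_count() >= world_size:
        mp.spawn(worker, args=(world_size,), nprocs=world_size)
```

Each spawned process gets its rank as the first argument from mp.spawn, so you never need to reference the physical IDs 4-7 inside the training code.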