Is there a way to figure out whether a PyTorch model is on the CPU or on a CUDA device?
I would like to figure out whether a PyTorch model is on the CPU or on CUDA, in order to initialize another variable as either a `torch.Tensor` or a `torch.cuda.Tensor`, depending on the model.
However, looking at the output of the `dir()` function, I only see the `.cpu()`, `.cuda()`, and `.to()` methods, which move the model to the CPU, a GPU, or another device specified in `to()`.
For a PyTorch tensor there is the `is_cuda` attribute, but there is no analogue for the whole model.

Is there some way to deduce this for a model, or does one need to refer to a particular weight?
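To illustrate the asymmetry described above, here is a minimal sketch: the tensor-level attribute exists, while `nn.Module` exposes no equivalent.

```python
import torch
import torch.nn as nn

# Tensors carry an is_cuda flag directly.
t = torch.zeros(3)
print(t.is_cuda)  # False: this tensor lives on the CPU

# Modules do not; this would raise AttributeError:
# nn.Linear(4, 2).is_cuda
```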
No, there is no such attribute for `nn.Module`; I believe this is because a model's parameters could be on multiple devices at the same time.

If you're working with a single device, a workaround is to check the first parameter:
next(model.parameters()).is_cuda
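Putting this together, here is a minimal sketch (using a small `nn.Linear` as a stand-in model) that checks the first parameter and then creates a new tensor on the same device, which sidesteps the `torch.Tensor` / `torch.cuda.Tensor` distinction from the question entirely:

```python
import torch
import torch.nn as nn

# A small stand-in model; any nn.Module works the same way.
model = nn.Linear(4, 2)

# Check where the first parameter lives (assumes all parameters
# share a single device, as discussed above).
param = next(model.parameters())
print(param.is_cuda)  # False for a freshly created CPU model

# More general pattern: read the device itself and allocate new
# tensors directly on it.
device = param.device
x = torch.zeros(3, 4, device=device)
print(x.device)  # cpu
```

Reading `param.device` rather than the boolean `is_cuda` generalizes to other backends and to models moved between devices with `.to()`.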