My GPU is detected by:
torch.cuda.is_available()
torch.randn(1).cuda()
tf.test.is_built_with_cuda()
torch.cuda.device_count()
but not by:
device_lib.list_local_devices()
tf.config.list_physical_devices('GPU')
The code I used is:
print(torch.cuda.is_available())
print(torch.randn(1).cuda())
print(device_lib.list_local_devices())
print(tf.test.is_built_with_cuda())
print(tf.__version__)
print(tf.config.list_physical_devices('GPU'))
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
print(torch.cuda.device_count())
and the output is:
True
tensor([0.7429], device='cuda:0')
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 11914755976629927437
xla_global_id: -1
]
True
2.10.0
[]
cuda:0
1
I'm using an NVIDIA GTX 1070 Ti with NVIDIA driver 460.89, CUDA 11.2, cuDNN 8.1.1, torch 1.7.1+cu110, and torchvision 0.8.2+cu110.
Despite the results above, my deep-learning model was successfully moved to CUDA. The problem is that I cannot get my tensor data onto the GPU. When I checked the data type with print(type(x)), it returned <class 'torch.Tensor'>.
I then tried both x.to(device) and x.cuda(), but in both cases x.is_cuda still returns False.
When I tried to pass this data through my model, it raised: RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same. This confirms that my model is on the GPU. I can't figure out why my tensor stays on the CPU while my model doesn't.
My data is a tensor converted from an image, with shape [3, 3, 512, 512]. My model is a GAN.
Never mind. I didn't reassign x.
In case someone bumps into this: using x = x.cuda() instead of just x.cuda() fixed it.
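The underlying reason: Tensor.cuda() and Tensor.to() return a new tensor on the target device and leave the original untouched, so the result must be reassigned. A minimal sketch of the difference (the tensor name x is from the question; the shape matches the data described above):

```python
import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

x = torch.randn(3, 3, 512, 512)  # e.g. a batch of image tensors, created on CPU

# Calling .to() / .cuda() without reassignment does nothing to x itself:
x.to(device)
print(x.is_cuda)  # still False - the moved copy was discarded

# Reassigning captures the tensor that now lives on the target device:
x = x.to(device)
print(x.is_cuda)  # True when a GPU is available
```

Note that nn.Module.cuda() behaves differently: it moves the module's parameters in place, which is why the model ended up on the GPU without reassignment while the input tensor did not.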