
CUDA problem: I think my GPU is not being detected

I am trying to install this: https://github.com/bycloudai/CVPR2022-DaGAN-Window

and used

conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch

to install torch.

However, I ran into a small problem.

This is the error I get when I try to run the following command in the Anaconda prompt (conda):

python demo.py  --config config/vox-adv-256.yaml --driving_video driving/driving.mp4 --source_image input/input.jpg --checkpoint checkpoints/SPADE_DaGAN_vox_adv_256.pth.tar --relative --adapt_scale --kp_num 15 --generator SPADEDepthAwareGenerator

ERROR:

UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=None`.
warnings.warn(msg)
Traceback (most recent call last):
File "demo.py", line 165, in <module>
loaded_dict_enc = torch.load('depth/models/weights_19/encoder.pth')
File "C:\Users\Qwepy\anaconda32\envs\DaGAN\lib\site-packages\torch\serialization.py", line 789, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "C:\Users\Qwepy\anaconda32\envs\DaGAN\lib\site-packages\torch\serialization.py", line 1131, in _load
result = unpickler.load()
File "C:\Users\Qwepy\anaconda32\envs\DaGAN\lib\site-packages\torch\serialization.py", line 1101, in persistent_load
load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
File "C:\Users\Qwepy\anaconda32\envs\DaGAN\lib\site-packages\torch\serialization.py", line 1083, in load_tensor
wrap_storage=restore_location(storage, location),
File "C:\Users\Qwepy\anaconda32\envs\DaGAN\lib\site-packages\torch\serialization.py", line 215, in default_restore_location
result = fn(storage, location)
File "C:\Users\Qwepy\anaconda32\envs\DaGAN\lib\site-packages\torch\serialization.py", line 182, in _cuda_deserialize
device = validate_cuda_device(location)
File "C:\Users\Qwepy\anaconda32\envs\DaGAN\lib\site-packages\torch\serialization.py", line 166, in validate_cuda_device
raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

What should I do? I have CUDA toolkit version 11.3.1 and an RTX 2060 setup. I think the problem is either in this program's settings or that it doesn't detect my GPU.

I tried reinstalling CUDA and matched its version to the one I installed with PyTorch. I also reinstalled PyTorch to make sure it is not the cpuonly build.
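
A quick way to check whether this PyTorch build sees the GPU at all is something like this (a minimal sketch, run with python inside the DaGAN conda environment):

import torch

print("torch version:", torch.__version__)          # a "+cpu" suffix would mean a CPU-only build
print("built with CUDA:", torch.version.cuda)       # None would also mean a CPU-only build
print("cuda available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))  # should show the RTX 2060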

You don't need to install CUDA separately from what is installed along with PyTorch in the conda environment. Kindly check your NVIDIA GPU driver version and whether it is compatible with the CUDA toolkit. If not, you have to either upgrade or downgrade it accordingly.
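
One way to compare the two sides is a small sketch like the one below. It assumes nvidia-smi is on the PATH; the "CUDA Version" it reports is the highest CUDA runtime the installed driver supports, and it should be at least as high as the 11.3 toolkit this PyTorch build targets.

import re
import subprocess

import torch

# Driver side: nvidia-smi's header reports the maximum CUDA version the driver supports.
smi = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
match = re.search(r"CUDA Version:\s*([\d.]+)", smi)
print("driver supports up to CUDA:", match.group(1) if match else "unknown")

# Toolkit side: the CUDA version this PyTorch build was compiled against (11.3 here).
print("PyTorch built with CUDA:", torch.version.cuda)

If nvidia-smi itself fails, or the driver-side number is lower than 11.3, updating the GPU driver is the first thing to try.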

Try installing the newest version of PyTorch by following https://pytorch.org/get-started/locally/
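
For reference, the selector on that page generates a conda command along these lines for a CUDA build (the exact versions depend on what you pick, and DaGAN may expect a specific torch release, so treat this as a sketch rather than the repo's pinned setup):

conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia

If conda pulled in the cpuonly variant earlier, removing the cpuonly package from the environment before reinstalling usually forces the CUDA build to be chosen.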
