How can I use the GPU in a Docker image with Theano, launched from a Windows host?
I want to run Theano via a Docker image on my PC with Windows installed. The Docker image contains an Ubuntu system, CUDA drivers and Theano ( https://hub.docker.com/r/kaixhin/cuda-theano/ ), but in order to use the GPU in my algorithm I need to attach the Nvidia devices to the container:
docker run -it --device /dev/nvidiactl --device /dev/nvidia-uvm --device /dev/nvidia0 kaixhin/cuda-theano
Is there a way to do this on Windows, since I don't have a path like /dev/nvidiactl? I have been looking for other Docker images, but it seems that all of them use Linux as the host system. Is there a version that will allow me to use the GPU from Windows?
For now I can run my script in Docker, but it uses only my CPU:
WARNING (theano.sandbox.cuda): CUDA is installed, but device gpu is not available (error: Unable to get the number of gpus available: no CUDA-capable device is detected)
In order to run CUDA Docker images you need NVIDIA Docker. Unfortunately, Theano is not supported as an official image at the moment, but you can write your own Dockerfile leveraging nvidia/cuda.
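As a sketch of that approach, a minimal Dockerfile built on the nvidia/cuda base image might look like the following. Note that the base-image tag, package list and Theano install method here are assumptions, not an official recipe; adjust them to match your CUDA version.

```dockerfile
# Sketch: Theano on top of the official nvidia/cuda base image.
# The tag 8.0-cudnn5-devel is an assumption; pick the one matching your driver.
FROM nvidia/cuda:8.0-cudnn5-devel

# Python and numeric libraries Theano needs (assumed package set)
RUN apt-get update && apt-get install -y \
        python-dev python-pip python-numpy python-scipy && \
    rm -rf /var/lib/apt/lists/*

RUN pip install Theano

# Make Theano target the GPU by default
ENV THEANO_FLAGS=device=gpu,floatX=float32
```

You would then build it with `docker build -t my-theano .` and run it through `nvidia-docker run -it my-theano`, which mounts the NVIDIA device nodes into the container for you.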
Having said that, you won't be able to do it on Windows, because Docker needs a Linux VM and there is no support for VM GPU passthrough on Windows.
You can try this image: https://hub.docker.com/r/kaixhin/cuda-theano/

It requires nvidia-docker:
nvidia-docker run -it kaixhin/cuda-theano