
How to use a remote machine's GPU in jupyter notebook

I am trying to run TensorFlow on a remote machine's GPU through a Jupyter notebook. However, when I print the available devices with tf, I only see CPUs. I have never used a GPU before, and I am also relatively new to using conda and Jupyter notebooks remotely, so I am not sure how to set up Jupyter to use the GPU.
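For reference, a minimal way to check which devices TensorFlow can see is `device_lib.list_local_devices()` (this is the TF1-era API matching the rest of this thread; the helper below is an illustrative sketch that simply returns an empty list if TensorFlow is not installed or no GPU is visible):

```python
def list_gpus():
    """Return the names of GPU devices visible to TensorFlow.

    Returns [] if TensorFlow is not importable or no GPU is visible,
    which is exactly the symptom described in the question.
    """
    try:
        from tensorflow.python.client import device_lib
    except ImportError:
        return []
    return [d.name for d in device_lib.list_local_devices()
            if d.device_type == "GPU"]


print(list_gpus())  # e.g. ['/device:GPU:0'] when a GPU is usable
```

If this prints an empty list on the remote machine, TensorFlow cannot use the GPU at all, which points at the install (tensorflow vs tensorflow-gpu) or the CUDA driver rather than at Jupyter itself.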

I am using an environment set up by someone else who has already run the same code on the same GPU, but they did it via a Python script, not in a Jupyter notebook.

This is the only GPU-related code in the other person's file:

import tensorflow as tf
from keras.backend import set_session

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand instead of all at once
set_session(tf.Session(config=config))

I think the problem was that I had tensorflow installed in my environment instead of tensorflow-gpu. But now I get the message "cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version", and I don't know how to update the driver through the terminal.
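That error means the NVIDIA driver on the machine is older than the minimum the installed CUDA runtime requires. Conceptually the check is just a version comparison; the sketch below illustrates it with a small subset of NVIDIA's published minimum-driver table (the exact values should be confirmed against NVIDIA's CUDA compatibility documentation):

```python
# Minimum NVIDIA driver version required by some CUDA runtime releases,
# per NVIDIA's compatibility table (illustrative subset, verify for your version).
MIN_DRIVER = {
    "9.0": (384, 81),
    "10.0": (410, 48),
    "10.1": (418, 39),
}


def driver_sufficient(cuda_runtime, driver_version):
    """Compare an installed driver string like '390.116' against the
    minimum required for the given CUDA runtime version."""
    required = MIN_DRIVER[cuda_runtime]
    installed = tuple(int(part) for part in driver_version.split("."))
    return installed >= required


# A 390.x driver is too old for the CUDA 10.0 runtime -> the error above.
print(driver_sufficient("10.0", "390.116"))
```

You can read the installed driver version from the top line of `nvidia-smi` output on the remote machine; if it is below the minimum for your CUDA runtime, updating the driver (as described below) is the fix.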

How is your environment set up? Specifically, what is your remote environment, and what is your local environment? It sounds like your CUDA drivers are out of date, but it could be more than just that. If you are just getting started, I would recommend finding an environment that requires little to no configuration work on your part, so you can get started more easily and quickly.

For example, you can run GPUs in the cloud and connect to them via a local terminal. You can also use Colab as your "local" frontend by connecting it to a local runtime. (This video explains that particular setup, but there are lots of other options.)

You may also want to try running nvidia-smi on the remote machine to see whether the GPUs are visible.

Here is another solution that describes how to set up a GPU JupyterLab instance with Docker.

To update your drivers via the terminal, run:

ubuntu-drivers devices
sudo ubuntu-drivers autoinstall
sudo reboot

Are your CUDA paths set appropriately? Like this?

export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
