Requesting 2 GPUs using SLURM and running 1 python script

I am trying to allocate 2 GPUs and run 1 python script across both of them. The python script requires the variables $AMBERHOME, which is obtained by sourcing the amber.sh script, and $CUDA_VISIBLE_DEVICES. The $CUDA_VISIBLE_DEVICES variable should equal something like 0,1 for the two GPUs I have requested.
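
For context, a minimal interactive check of those two variables might look like this (a sketch, assuming the same amber.sh path used in the batch script below):

source /usr/local/amber20/amber.sh                  # provides $AMBERHOME
echo "AMBERHOME=$AMBERHOME"
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"   # expected to look like 0,1 with two GPUs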

Currently, I have been experimenting with this basic script.

#!/bin/bash
#
#SBATCH --job-name=test
#SBATCH --output=slurm_info
#SBATCH --nodes=2
#SBATCH --ntasks=2
#SBATCH --time=5:00:00
#SBATCH --partition=gpu-v100

## Prepare Run
source /usr/local/amber20/amber.sh
export CUDA_VISIBLE_DEVICES=0,1

## Perform Run
python calculations.py

When I run the script, I can see that 2 GPUs are requested.

JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
11111   GPU         test   jsmith CF       0:02     2     gpu-[1-2]

When I look at the output ('slurm_info'), I see

cpu-bind=MASK - gpu-1, task  0  0 [10111]: mask 0x1 set

and of course information about the failed job.

Typically, when I run this script on my local workstation (which has 2 GPUs) and enter nvidia-smi at the command line, I see...

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.91.03    Driver Version: 460.91.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce RTX 208...  On   | 00000000:00:1E.0 Off |                    0 |
| N/A   29C    P0    24W / 300W |      0MiB / 16160MiB |      0%   E. Process |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  GeForce RTX 208...  On   | 00000000:00:1E.0 Off |                    0 |
| N/A   29C    P0    24W / 300W |      0MiB / 16160MiB |      0%   E. Process |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

However, when I use nvidia-smi with my previous batch script on the cluster, I see the following.

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.91.03    Driver Version: 460.91.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce RTX 208...  On   | 00000000:00:1E.0 Off |                    0 |
| N/A   29C    P0    24W / 300W |      0MiB / 16160MiB |      0%   E. Process |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

This makes me think that when the python script runs, it only sees one GPU.

You are requesting two nodes, not two GPUs. The correct syntax for requesting GPUs depends on the Slurm version and how your cluster is set up, but you generally use #SBATCH -G 2 to request two GPUs.
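
For example, depending on the Slurm version and the site's GRES configuration, one of the following directives would typically be used (a sketch; check the cluster's documentation for the exact form):

#SBATCH -G 2                  # shorthand for --gpus=2 (Slurm 19.05 or newer)
#SBATCH --gpus=2              # equivalent long form
#SBATCH --gres=gpu:2          # older, GRES-based syntax
#SBATCH --gres=gpu:v100:2     # GRES syntax with an explicit GPU type, if the cluster defines one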

Slurm usually also takes care of CUDA_VISIBLE_DEVICES for you, so there is no need to set it manually. Try this:

#!/bin/bash
#
#SBATCH --job-name=test
#SBATCH --output=slurm_info
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=2     # adjust to your workload
#SBATCH -G 2
#SBATCH --time=5:00:00
#SBATCH --partition=gpu-v100

## Prepare Run
source /usr/local/amber20/amber.sh

## Perform Run
python calculations.py
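
To confirm inside the job that both GPUs were actually allocated and made visible, a couple of diagnostic lines could be added just before the python call (an optional check, not part of the original answer):

echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"   # set by Slurm for the allocated GPUs
nvidia-smi -L                                       # should list two GPUs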
