
How can I associate a GPU with each CPU process?

I have a question:

Let's say I have 2 GPUs in my system and 2 host processes running CUDA code. How can I be sure that each process takes a different GPU?

I'm considering setting the exclusive compute mode, but I cannot see how to take advantage of it: once I have checked that a device is free, how can I be sure that it remains free until I call cudaSetDevice?

EDIT:

So far I've tried this:

int devN = 0;
while (cudaSuccess != cudaSetDevice(devN))
    devN = (devN + 1) % 2;

but I get a

CUDA Runtime API error 77: an illegal memory access was encountered.

which is not strange since I am in EXCLUSIVE_PROCESS mode.
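One reason the retry loop above fails is that cudaSetDevice by itself does not create a context on the device, so it can succeed even when the device is already taken; the failure then surfaces later, at the first real runtime call. A sketch of a probing loop that forces context creation (here with cudaFree(0), a common idiom for this) could look like the following; the function name and the reset-on-failure handling are assumptions, not part of the original question:

```cuda
#include <cuda_runtime.h>

// Probe each device in turn. cudaSetDevice alone does not create a context,
// so force context creation with cudaFree(0); in EXCLUSIVE_PROCESS mode this
// fails on a device that another process already holds.
int acquireFreeDevice(int deviceCount)
{
    for (int dev = 0; dev < deviceCount; ++dev) {
        if (cudaSetDevice(dev) != cudaSuccess)
            continue;
        if (cudaFree(0) == cudaSuccess)   // context created: device is ours
            return dev;
        cudaGetLastError();               // clear the error before retrying
        cudaDeviceReset();                // tear down the failed context
    }
    return -1;  // no free device found
}
```

Even with this approach there is a window between probing and acquiring; forcing the context creation as shown makes the probe itself the acquisition, which is what closes the race the question asks about.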

There are two parts to this question: assigning a process to a GPU, and making sure a GPU is available to a single process.

Assigning a process to a GPU

There is a simple way to accomplish this using the CUDA_VISIBLE_DEVICES environment variable: start your first process with CUDA_VISIBLE_DEVICES=0 and your second process with CUDA_VISIBLE_DEVICES=1. Each process will see a single GPU, with device index 0, and each will see a different physical GPU.
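As a minimal sketch of the launch, where printenv stands in for your actual CUDA binary (the binary name is a placeholder, not from the original post):

```shell
# Each process inherits a different CUDA_VISIBLE_DEVICES value, so each one
# sees exactly one GPU, exposed as device index 0 inside that process.
CUDA_VISIBLE_DEVICES=0 printenv CUDA_VISIBLE_DEVICES   # first process: physical GPU 0
CUDA_VISIBLE_DEVICES=1 printenv CUDA_VISIBLE_DEVICES   # second process: physical GPU 1
```

Inside each process, cudaSetDevice(0) then refers to a different physical GPU, so no coordination between the processes is needed.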

Running nvidia-smi topo -m will display GPU topology and provide you with the corresponding CPU affinity.

Then, you may set the CPU affinity for your process with taskset or numactl on Linux, or SetProcessAffinityMask on Windows.
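A hedged sketch of combining the two on Linux, assuming (hypothetically) that nvidia-smi topo -m reported core 0 as local to GPU 0; printenv again stands in for the real CUDA binary:

```shell
# Pin the process that uses GPU 0 to a CPU core local to that GPU.
# The core list (-c 0) is an assumed value; take the real one from
# the CPU Affinity column of `nvidia-smi topo -m`.
CUDA_VISIBLE_DEVICES=0 taskset -c 0 printenv CUDA_VISIBLE_DEVICES
```

Keeping the process on cores attached to the same NUMA node as the GPU avoids cross-socket traffic on host-device transfers.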

Process has exclusive access to a GPU

To make sure that no other process may access your GPU, configure the GPU driver for exclusive-process mode: nvidia-smi -c EXCLUSIVE_PROCESS (compute mode 3; requires administrative privileges). Note that mode 1 is the deprecated EXCLUSIVE_THREAD mode, not EXCLUSIVE_PROCESS.
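You can verify the setting from inside a CUDA program by reading the computeMode field of the device properties; a minimal sketch:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Print the compute mode of every visible device to confirm the driver
// setting: cudaComputeModeDefault = 0, cudaComputeModeProhibited = 2,
// cudaComputeModeExclusiveProcess = 3.
int main()
{
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int dev = 0; dev < n; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("device %d: computeMode = %d\n", dev, prop.computeMode);
    }
    return 0;
}
```

In EXCLUSIVE_PROCESS mode, the second process that tries to create a context on an occupied device receives an error instead of sharing the GPU, which is exactly the guarantee the question asks for.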
