
How can I tell PyCUDA which GPU to use?

I have two NVIDIA cards in my machine, and both are CUDA capable. When I run the example script to get started with PyCUDA shown here: http://documen.tician.de/pycuda/ I get the error

nvcc fatal   : Value 'sm_30' is not defined for option 'gpu-architecture'

My computing GPU is compute capability 3.0, so sm_30 should be the right option for the nvcc compiler. My graphics GPU is only CC 1.2, so I thought maybe that's the problem. I've installed the CUDA 5.0 release for Linux with no errors, including all the compiler and Python components.

Is there a way to tell PyCUDA explicitly which GPU to use?

nvcc isn't going to complain based on the specific GPUs you have installed. It will compile for whatever GPU type you tell it to compile for. The problem is that you are specifying sm_30, which is not a valid option for --gpu-architecture when a --gpu-code option is also specified.

You should be passing compute_30 for --gpu-architecture and sm_30 for --gpu-code.
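
In PyCUDA you can set these through SourceModule's arch and code keywords, which are forwarded to nvcc. A minimal sketch (the kernel is just a placeholder):

import pycuda.autoinit  # creates a context on the default device
from pycuda.compiler import SourceModule

kernel_src = """
__global__ void twice(float *a)
{
    a[threadIdx.x] *= 2.0f;
}
"""

# arch/code map to nvcc's --gpu-architecture and --gpu-code options
mod = SourceModule(kernel_src, arch="compute_30", code="sm_30")
twice = mod.get_function("twice")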

Also be sure you have the correct nvcc in use (which nvcc and nvcc --version will tell you) and that you are not inadvertently using some old version of the CUDA toolkit.

Once you have the compile problem sorted out, there is an environment variable, CUDA_DEVICE, that PyCUDA will observe to select a particular installed GPU.

From here:

CUDA_DEVICE=2 python my-script.py
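
Under the hood, pycuda.autoinit creates its context with pycuda.tools.make_default_context(), which consults CUDA_DEVICE, so a sketch of the same selection done from inside the script (the index 1 is just an example for a two-GPU machine) is:

import os
os.environ["CUDA_DEVICE"] = "1"  # must be set before pycuda.autoinit is imported

import pycuda.autoinit
print(pycuda.autoinit.device.name())  # should report the selected GPU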

By the way, someone else has had your problem. Are you sure you don't have an old version of the CUDA toolkit lying around that PyCUDA is using?

I don't know about the Python wrapper (or about Python in general), but in C++ there is the WGL_NV_gpu_affinity NVIDIA extension, which allows you to target a specific GPU. You could probably write a wrapper for it in Python.

EDIT:

Now that I see you are actually running Linux, the solution is simpler (C++): you just need to open the right XDisplay before context init.

So basically the default GPU is usually targeted with the display string "0.0".

To open a display on the second GPU, you can do something like this:

    #include <cstdio>
    #include <X11/Xlib.h>

    int main()
    {
        Display* _display = NULL;       // connection to the X server
        const char* gpuNum = "0:1";     // display string addressing the second GPU

        if (!(_display = XOpenDisplay(gpuNum))) {
            printf("error: %s\n", "failed to open display");
        } else {
            printf("message: %s\n", "display created");
        }

        // ... here comes the rest of the context setup ...
        return 0;
    }

At least currently, it seems possible to just say

import pycuda.driver as drv
drv.init()  # initialise the driver API before creating any Device
drv.Device(6).make_context()

and this sets device 6 as the current context.
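
A slightly fuller sketch with the driver API, enumerating the installed GPUs first and cleaning up afterwards (the index 1 and the try/finally style are assumptions, not part of the original answer):

import pycuda.driver as drv

drv.init()  # required before any Device() call
for i in range(drv.Device.count()):
    dev = drv.Device(i)
    print(i, dev.name(), dev.compute_capability())

ctx = drv.Device(1).make_context()  # make GPU 1 current
try:
    pass  # ... kernels and memcpys run against this context ...
finally:
    ctx.pop()  # release the context when finished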
