
Specify GPU device with OpenCV Python API

When working on a project that uses TensorFlow (with GPU support) to process features extracted in real time from OpenCV (3.4.3, Python API) captures, I get the following error from cuDNN whenever I try to read from the capture after starting the tf session:

F1028 02:37:31.456640 xxxxx cudnn_conv_layer.cu:28] Check failed: status == CUDNN_STATUS_SUCCESS (8 vs. 0) CUDNN_STATUS_EXECUTION_FAILED

I suspect the issue is that OpenCV and TensorFlow are both using the GPU via CUDA at the same time, and the GPU runs out of memory. My current workaround is to start capturing with OpenCV first and only start the tf session when it is actually needed. That way TensorFlow detects that the GPU is busy and falls back to CPU only. However, the frame rate drops significantly as a result.
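If the CPU fallback is too slow, one alternative worth sketching (not from the original post; it uses the TF 1.x session-config API, which matches the era of the question, and the helper name is my own) is to stop TensorFlow from claiming all GPU memory up front, leaving headroom for other CUDA users such as OpenCV:

```python
# Hedged sketch: build a TF 1.x session config with on-demand GPU
# memory allocation instead of the default grab-everything behavior.
# Returns None if TensorFlow is not installed in this environment.
def make_session_config():
    try:
        import tensorflow as tf
        # TF 2.x (and TF >= 1.13) expose the session API under
        # tf.compat.v1; older TF 1.x has ConfigProto at top level.
        tf1 = getattr(getattr(tf, "compat", tf), "v1", tf)
        config = tf1.ConfigProto()
        config.gpu_options.allow_growth = True  # grow allocation as needed
        return config
    except (ImportError, AttributeError):
        return None  # TensorFlow unavailable; sketch only


config = make_session_config()
# Usage inside a TF 1.x-style program:
#   sess = tf.compat.v1.Session(config=config)
```

Whether this leaves enough memory for OpenCV depends on the model, so it is a mitigation rather than a guaranteed fix.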

Considering that I only use OpenCV for capturing and basic preprocessing, I don't think GPU support is necessary and it would be preferable to let TensorFlow use the GPU.

Is there a way to specify which GPU device OpenCV's Python API should use (or whether it should use the GPU at all)? I see that the C++ API has a setDevice() method in the gpu namespace. Is there an equivalent in the Python API?

The equivalent call in the Python API lives under the cv2.cuda namespace:

cv2.cuda.setDevice(device_id)  # device_id: int
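A minimal usage sketch (the helper name and default device id 0 are my own assumptions, and this only has an effect on an OpenCV build compiled with CUDA — the stock pip wheels report zero CUDA devices):

```python
# Sketch: pin OpenCV's CUDA routines to one GPU, guarding against
# CPU-only builds. Helper name and device id are assumptions.
def select_opencv_gpu(device_id=0):
    """Return True if the given CUDA device was selected for OpenCV."""
    try:
        import cv2
        # Returns 0 on builds without CUDA support (e.g. pip wheels).
        if cv2.cuda.getCudaEnabledDeviceCount() > device_id:
            cv2.cuda.setDevice(device_id)
            return True
    except (ImportError, AttributeError):
        pass  # cv2 missing, or an older build without the cuda module
    return False


applied = select_opencv_gpu(0)  # False on CPU-only builds
```

Note that plain capture via cv2.VideoCapture runs on the CPU anyway; this call only affects the cv2.cuda routines.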

Source
