
dlib cuda not using GPU

I wanted to improve my CNN facial recognition code by using an Nvidia GPU instead of the CPU, so I found and installed a CUDA-enabled dlib build following these instructions .

Installation went well, so I checked whether dlib was using CUDA in my Python environment:

Python 3.6.9 (default, Jul  17 2020, 12:50:27)
[GCC 8.4.0] on linux
>>> import dlib
>>> dlib.DLIB_USE_CUDA
True
>>> print(dlib.cuda.get_num_devices())
1

Since this looked fine, I ran my code again, but there was no improvement, and after checking the GPU it was still not being used at all. So I tried the following command:

>>> print(dlib.cuda.get_device())

And it returns:

0

I'm not sure what this output means. After a lot of research I still cannot figure out why dlib doesn't use my GPU. Has anyone faced the same issue before?
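For context, here is a minimal sketch of what the two dlib calls report, plus a call that should actually load work onto the GPU (this assumes a standard CUDA-enabled dlib build; mmod_human_face_detector.dat and test.jpg are placeholder paths):

import dlib

# True only if this dlib build was compiled with CUDA support.
print(dlib.DLIB_USE_CUDA)

# Number of CUDA devices dlib can see (1 on a Jetson AGX Xavier).
print(dlib.cuda.get_num_devices())

# Index of the currently selected device; 0 is the expected value
# on a single-GPU machine, not an error.
print(dlib.cuda.get_device())

# Only GPU-backed models exercise CUDA. The CNN face detector is one
# of them; the paths below are placeholders for the mmod model file
# and a test image.
cnn_detector = dlib.cnn_face_detection_model_v1("mmod_human_face_detector.dat")
img = dlib.load_rgb_image("test.jpg")
detections = cnn_detector(img, 1)
print("faces found:", len(detections))

While the CNN detector runs, GPU activity should show up in tegrastats on the Jetson. Note that only GPU-backed models like this one touch CUDA at all; dlib's default HOG-based frontal face detector runs entirely on the CPU, so it never produces any GPU utilization.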

My workspace is a Jetson AGX Xavier (JetPack 4.4), running Ubuntu with CUDA version 10.2.89.

PS: I also use the TensorFlow and Keras libraries; both are installed to work with CUDA.

If you are familiar with TensorFlow and Keras, I recommend using deepface for face recognition. It wraps state-of-the-art face recognition models and builds Keras models in the background. That's why its default usage will run on the GPU if you have installed the tensorflow-gpu package.

#!pip install deepface
from deepface import DeepFace

# Any of these model names can be passed to DeepFace.verify.
models = ["VGG-Face", "Facenet", "OpenFace", "DeepFace", "DeepID"]

# Compare two face images and report whether they show the same person.
resp = DeepFace.verify("img1.jpg", "img2.jpg", model_name=models[0])
print(resp["verified"])
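As a sanity check before calling deepface, you can confirm that TensorFlow actually sees the Xavier GPU (a sketch assuming TensorFlow 2.x; on TensorFlow 1.x, tf.test.is_gpu_available() serves the same purpose):

import tensorflow as tf

# List the GPUs TensorFlow can use; on the Xavier this should print one
# entry like PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU').
gpus = tf.config.list_physical_devices('GPU')
print(gpus)

# With an empty list, deepface (and any other Keras model) silently
# falls back to the CPU.
if not gpus:
    print("TensorFlow does not see a GPU, deepface will run on the CPU")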
