
How to train a Haar cascade on a GPU

I am trying to train a cascade using OpenCV's opencv_traincascade tool, but the program is very slow because it runs on the CPU. Is there any way I can run the program on a GPU for faster training? I tried following this link:

https://www.cerebrumedge.com/single-post/2017/12/26/Compiling-OpenCV-with-CUDA-and-FFMpeg-on-Ubuntu-1604 , but later realized that it does not work for Python, at least according to this link: OpenCV 3.2 CUDA support python. However, the answerer there also mentioned that one can use OpenCL with Python.

Is there any way I can use OpenCL to train the cascade on a GPU?
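For context on the OpenCL route: in OpenCV's Python bindings, OpenCL is exposed through the transparent API (T-API), which accelerates image-processing calls on cv2.UMat objects, but it does not plug into opencv_traincascade, which is a standalone CPU program. A minimal sketch of checking and enabling OpenCL from Python (the image path is a placeholder):

```python
import cv2

# Report whether this OpenCV build has OpenCL support and a usable device.
print("OpenCL available:", cv2.ocl.haveOpenCL())

# Enable the transparent API; functions that receive cv2.UMat arguments
# will then dispatch to the OpenCL device where a kernel exists.
cv2.ocl.setUseOpenCL(True)

# "sample.jpg" is a placeholder path, not from the original question.
img = cv2.UMat(cv2.imread("sample.jpg"))
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # may run via OpenCL
```

So OpenCL with Python helps at detection/preprocessing time, not during cascade training itself.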

Thanks in advance

I don't know how to use GPU computing, but I used parallel computing on the CPU and got roughly a 4x speedup.
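One way to get parallel CPU training is through opencv_traincascade's own options: OpenCV 3.x builds accept a -numThreads flag that caps the worker threads used for parallel feature precalculation, and larger precalc buffers let more of that work stay in RAM. A sketch of such an invocation, where all file names, sample counts, and sizes are placeholder values, not from the original post:

```shell
# Parallel CPU training sketch (placeholder paths and counts).
# -numThreads limits the threads opencv_traincascade may use;
# -precalcValBufSize / -precalcIdxBufSize (in MB) control how much
# precalculated feature data is kept in memory.
opencv_traincascade \
  -data cascade_dir \
  -vec positives.vec \
  -bg negatives.txt \
  -numPos 1000 -numNeg 2000 \
  -numStages 15 \
  -w 24 -h 24 \
  -precalcValBufSize 2048 -precalcIdxBufSize 2048 \
  -numThreads 8
```

In practice the buffer sizes often matter as much as the thread count, since stages stall when feature values must be recomputed instead of read from the buffers.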


