Feasibility of using OpenCV with Python over a GPU on Windows
I have to read images coming from a camera, add some shapes with OpenCV, and return the image to a webpage with Flask. This application runs on Windows 10. The detailed steps are:

1. Read a frame from the camera.
2. Draw the shapes on the frame with OpenCV.
3. Convert the image from raw format to JPG or PNG with cv2.imencode (the OpenCV Python wrapper).
4. Return the encoded image to the webpage with Flask.

Nevertheless, this slows down the frame rate of the video shown on the website. I've seen in some places that OpenCV can be used to perform some operations on the GPU (an Nvidia GeForce RTX). I've tried wrapping the frame in cv2.UMat just before calling cv2.imencode, but there is no difference.
And here is my question: is it feasible to use OpenCV with Python over a GPU on Windows to speed this up?
I finally compiled opencv-python with the GPU flags, i.e. with the needed CUDA flags. It took me weeks to install all the needed dependencies (image libraries, video libraries, Fortran, LAPACK, BLAS...).

The result is that the call to cv2.imencode is way faster, and I can see the GPU usage increase in the Windows resource monitor.