
BeagleBone Black OpenCV Python is too slow

I'm trying to get images from a webcam with OpenCV and Python. The code is as basic as:

import cv2
import time
cap=cv2.VideoCapture(0)
cap.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH,640)
cap.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT,480)
cap.set(cv2.cv.CV_CAP_PROP_FPS, 20)

a=30
t=time.time()
while (a>0):
        now=time.time()
        print now-t
        t=now
        ret,frame=cap.read()
        #Some processes
        print a,ret
        print frame.shape
        a=a-1
        k=cv2.waitKey(20)
        if k==27:
                break
cv2.destroyAllWindows()

But it runs slowly. Program output:

VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
HIGHGUI ERROR: V4L: Property <unknown property string>(5) not supported by device
8.82148742676e-06
select timeout
30 True
(480, 640, 3)
2.10035800934
select timeout
29 True
(480, 640, 3)
2.06729602814
select timeout
28 True
(480, 640, 3)
2.07144904137
select timeout

Configuration:

  • Beaglebone Black RevC
  • Debian wheezy
  • OpenCV 2.4
  • Python 2.7

The "secret" to obtaining higher FPS when processing video streams with OpenCV is to move the I/O (i.e., the reading of frames from the camera sensor) to a separate thread.

Calling the read() method of a cv2.VideoCapture object in the main loop makes the entire process very slow, because it has to wait for each I/O operation to complete before it can move on to the next one (a blocking process).

To achieve this FPS increase / latency decrease, the goal is to move the reading of frames from the webcam or USB device into an entirely separate thread, independent of the main Python script.

This allows frames to be read continuously on the I/O thread while the root thread processes the current frame. Once the root thread has finished processing its frame, it simply grabs the latest frame from the I/O thread, without having to wait for blocking I/O operations.

You can read Increasing webcam FPS with Python and OpenCV for the steps involved in implementing the threads.


EDIT

Based on the discussion in the comments, I feel you could rewrite the code as follows:

import cv2

cv2.namedWindow("output")
cap = cv2.VideoCapture(0)

if cap.isOpened():              # Getting the first frame
    ret, frame = cap.read()
else:
    ret = False

while ret:
    cv2.imshow("output", frame)
    ret, frame = cap.read()
    key = cv2.waitKey(20)
    if key == 27:                    # exit on Escape key
        break
cv2.destroyWindow("output")

I encountered a similar problem when working on a project using OpenCV 2.4.9 on the Intel Edison platform. Before doing any processing, it was taking roughly 80 ms just to perform the frame grab. It turns out that OpenCV's camera capture logic for Linux doesn't seem to be implemented properly, at least in the 2.4.9 release. The underlying driver only uses one buffer, so it's not possible to work around it with multi-threading in the application layer: until you attempt to grab the next frame, the only buffer in the V4L2 driver stays locked.

The solution is to not use OpenCV's VideoCapture class. Maybe it was fixed to use a sensible number of buffers at some point, but as of 2.4.9 it wasn't. In fact, if you look at this article by the same author as the link provided by @Nickil Maveli, you'll find that as soon as he starts offering suggestions for improving FPS on a Raspberry Pi, he stops using OpenCV's VideoCapture. I don't believe that's a coincidence.

Here's my post about it on the Intel Edison forum: https://communities.intel.com/thread/58544

I basically wound up writing my own class to handle the frame grabs, directly using V4L2. That way you can provide a circular list of buffers and properly decouple the frame grabbing from the application logic. That was done in C++, though, for a C++ application. Assuming the link above delivers on its promises, that might be a far easier approach. I'm not sure whether it would work on the BeagleBone, but maybe there's something similar to PiCamera out there. Good luck.

EDIT: I took a look at the source code for OpenCV 2.4.11. It looks like they now default to using 4 buffers, but you must be using V4L2 to take advantage of that. If you look closely at your error message HIGHGUI ERROR: V4L: Property... , you'll see that it references V4L, not V4L2. That means the build of OpenCV you're using is falling back on the old V4L driver. In addition to the single buffer causing performance issues, you're using an ancient driver that probably has many limitations and performance problems of its own.

Your best bet is to build OpenCV yourself to make sure it uses V4L2. If I recall correctly, the OpenCV configuration process checks whether the V4L2 drivers are installed on the machine and builds accordingly, so make sure V4L2 and any related dev packages are installed on the machine you use to build OpenCV.
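One quick way to check which backend your build actually uses is to inspect the string returned by cv2.getBuildInformation() (available in the Python bindings). A small helper like this, which is my own sketch, filters out the relevant lines:

```python
def v4l_lines(build_info):
    """Return the lines of an OpenCV build-information string that
    mention V4L, to see whether V4L/V4L2 support was compiled in."""
    return [line.strip() for line in build_info.splitlines() if "V4L" in line]

# usage, assuming OpenCV is installed:
#   import cv2
#   print("\n".join(v4l_lines(cv2.getBuildInformation())))
```

If the output reports V4L2 support as missing, rebuilding OpenCV with the V4L2 dev packages installed is the fix suggested above.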

Try this one! I replaced some code in the cap.set() section.

import cv2
import time
cap=cv2.VideoCapture(0)
cap.set(3,640)   # 3 = CV_CAP_PROP_FRAME_WIDTH
cap.set(4,480)   # 4 = CV_CAP_PROP_FRAME_HEIGHT
cap.set(5, 20)   # 5 = CV_CAP_PROP_FPS

a=30
t=time.time()
while (a>0):
        now=time.time()
        print now-t
        t=now
        ret,frame=cap.read()
        #Some processes
        print a,ret
        print frame.shape
        a=a-1
        k=cv2.waitKey(20)
        if k==27:
                break
cv2.destroyAllWindows()

Output (PC webcam); your original code didn't work properly for me:

>>0.0
>>30 True
>>(480, 640, 3)
>>0.246999979019
>>29 True
>>(480, 640, 3)
>>0.0249998569489
>>28 True
>>(480, 640, 3)
>>0.0280001163483
>>27 True
>>(480, 640, 3)
>>0.0320000648499
