
OpenCV multiple USB camera on Raspberry Pi 3

I've looked at lots of previous questions related to this, and none have helped.

My setup:

  • One of these
    • they show up as /dev/video0 and /dev/video1
    • Images are 640 x 480
  • Raspberry Pi 3
  • Raspbian Jessie
  • OpenCV 3.1.0
  • Python 2.7

For either one of the cameras I can capture images and display them at a pretty decent rate with minimal latency (and occasional artifacts).

When I try to use both, however, I get maybe a tenth of the frame rate (although the delay between frames seems to vary wildly from frame to frame), all sorts of nasty image artifacts (see the example below), and an intolerable amount of lag.

[image: frame showing the capture artifacts]

The problem does not seem to be the cameras themselves or USB bandwidth on the device: when I connect the cameras to my Windows PC, I am able to capture and display at 30 FPS with no visual artifacts and very little lag.

As best I can tell, the problem must be the Pi hardware, the drivers, or OpenCV. I don't think it's the Pi hardware. I would be happy if I could achieve, with two cameras, half the frame rate I get with one camera (and I don't see why that shouldn't be possible) and no ugly artifacts.

Does anyone have any suggestions? I'm ultimately just trying to stream the video from the two cameras from my Pi to my desktop. If there are suggestions that don't involve OpenCV, I'm all ears; I am not trying to do any rendering or manipulation of the images on the Pi, but OpenCV is the only thing I've found that captures images even reasonably quickly (with one camera, of course).

Just for reference, the simple Python script I'm using is this:

import cv2
import numpy as np
import socket
import ctypes
import struct

cap = []
cap.append(cv2.VideoCapture(0))
cap.append(cv2.VideoCapture(1))

#grab a single frame from one camera
def grab(num):
    res, im = cap[num].read()
    return (res,im)

#grab a frame from each camera and stitch them
#side by side
def grabSBS():
    res, imLeft  = grab(1)
    #next line is for pretending I have 2 cameras
    #imRight = imLeft.copy()
    res, imRight = grab(0)
    imSBS = np.concatenate((imLeft, imRight), axis=1)
    return res,imSBS

###For displaying locally instead of streaming
#while(False):
#    res, imLeft = grab(0)
#    imRight = imLeft.copy()
#    imSBS = np.concatenate((imLeft, imRight), axis=1)
#    cv2.imshow("win", imSBS)
#    cv2.waitKey(20)

header_data = ctypes.create_string_buffer(12)

while(True):
    sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sck.bind(("10.0.0.XXX", 12321))

    sck.listen(1)

    while(True):
        (client, address) = sck.accept()
        print "Client connected:", address
        try:
            while(True):
                res, im = grabSBS()
                if(res):
                    success, coded = cv2.imencode('.jpg', im)
                    if (success):
                        height, width, channels = im.shape
                        size = len(coded)
                        #12-byte header: width, height, JPEG byte count (big-endian ints)
                        struct.pack_into(">i", header_data, 0, width)
                        struct.pack_into(">i", header_data, 4, height)
                        struct.pack_into(">i", header_data, 8, size)
                        client.sendall(header_data.raw)
                        client.sendall(coded.tobytes())
        except Exception as ex:
            print "ERROR:", ex
            client.close()
            sck.close()
            exit()
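
For reference, a minimal receiver sketch for the desktop side that matches this 12-byte header layout (this is not part of my script above; the address is the same placeholder used by the sender):

import socket
import struct
import numpy as np
import cv2

def recvall(sock, count):
    #read exactly count bytes from the socket
    buf = b''
    while len(buf) < count:
        chunk = sock.recv(count - len(buf))
        if not chunk:
            raise RuntimeError("socket closed")
        buf += chunk
    return buf

sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sck.connect(("10.0.0.XXX", 12321))   #placeholder address, same port as the sender

while(True):
    #header is three big-endian ints: width, height, JPEG byte count
    width, height, size = struct.unpack(">iii", recvall(sck, 12))
    jpeg = recvall(sck, size)
    frame = cv2.imdecode(np.frombuffer(jpeg, dtype=np.uint8), cv2.IMREAD_COLOR)
    cv2.imshow("stream", frame)
    cv2.waitKey(1)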

UPDATE: I got it working much, much better by adding the following lines of code after initializing the VideoCapture objects:

cap[0].set(cv2.CAP_PROP_FPS, 15)
cap[1].set(cv2.CAP_PROP_FPS, 15)

This lowers both the required bandwidth and the OpenCV workload. I still get those horrible artifacts every few frames, so if anyone has advice on that, I'm happy to hear it.
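
For reference, one way to sanity-check whether the driver actually accepted the request is to read the property back (what get() returns depends on the V4L2 driver, so it may be 0 or only approximate):

print "camera 0 FPS:", cap[0].get(cv2.CAP_PROP_FPS)
print "camera 1 FPS:", cap[1].get(cv2.CAP_PROP_FPS)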

Well, after spending about 5 hours fighting with it, I seem to have found solutions.

First, apparently OpenCV was trying to capture at 30 FPS even though I wasn't able to pull frames at 30 FPS. I changed the VideoCapture frame rate to 15 FPS and the video became much, much smoother and faster.

cap[0].set(cv2.CAP_PROP_FPS, 15.0)
cap[1].set(cv2.CAP_PROP_FPS, 15.0)

That didn't get rid of the artifacts, though. I eventually found that if I do del(im) after sending the image over the network, the artifacts completely went away.
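
For reference, folding that change into the send loop from the question looks roughly like this (same 12-byte header as before; the only new line is the del(im) at the end of each iteration):

while(True):
    res, im = grabSBS()
    if(res):
        success, coded = cv2.imencode('.jpg', im)
        if (success):
            height, width, channels = im.shape
            struct.pack_into(">i", header_data, 0, width)
            struct.pack_into(">i", header_data, 4, height)
            struct.pack_into(">i", header_data, 8, len(coded))
            client.sendall(header_data.raw)
            client.sendall(coded.tobytes())
    #drop the reference to the frame once it has been sent;
    #this is what made the artifacts disappear
    del(im)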
