
Video from 2 cameras (for Stereo Vision) using OpenCV, but one of them is lagging

I'm trying to create Stereo Vision using 2 Logitech C310 webcams, but the result is not good enough. One of the videos lags behind the other one.

Here is my OpenCV program, using VC++ 2010:

#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    try
    {
        VideoCapture cap1;
        VideoCapture cap2;

        // Open both cameras and request the same resolution on each
        cap1.open(0);
        cap1.set(CV_CAP_PROP_FRAME_WIDTH, 1040.0);
        cap1.set(CV_CAP_PROP_FRAME_HEIGHT, 920.0);

        cap2.open(1);
        cap2.set(CV_CAP_PROP_FRAME_WIDTH, 1040.0);
        cap2.set(CV_CAP_PROP_FRAME_HEIGHT, 920.0);

        for (;;)
        {
            // Grab the next frame from each camera
            Mat frame;
            cap1 >> frame;

            Mat frame1;
            cap2 >> frame1;

            // Rotate each frame 90 degrees clockwise (transpose, then horizontal flip)
            transpose(frame, frame);
            flip(frame, frame, 1);

            transpose(frame1, frame1);
            flip(frame1, frame1, 1);

            imshow("Img1", frame);
            imshow("Img2", frame1);

            if (waitKey(1) == 'q')
                break;
        }

        cap1.release();
        cap2.release();
        return 0;
    }
    catch (cv::Exception & e)
    {
        cout << e.what() << endl;
    }
}

How can I avoid the lag?

You're probably saturating the USB bus.

Try plugging one camera into a front port and the other into a back port (in the hope that they land on different buses),

or reduce the frame size / FPS to generate less traffic.
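For instance, here is a minimal sketch of asking each capture for a smaller frame and a lower frame rate, dropped in right after the question's cap1/cap2 setup and using the same OpenCV 2.x constants. The 640x480 / 15 FPS values are just an illustrative choice, and not every webcam driver honours these requests, so it's worth printing back what you actually got:

// Sketch: request a smaller frame and a lower frame rate from each camera
// to reduce USB bandwidth; then check what the driver actually granted.
cap1.set(CV_CAP_PROP_FRAME_WIDTH, 640.0);
cap1.set(CV_CAP_PROP_FRAME_HEIGHT, 480.0);
cap1.set(CV_CAP_PROP_FPS, 15.0);

cap2.set(CV_CAP_PROP_FRAME_WIDTH, 640.0);
cap2.set(CV_CAP_PROP_FRAME_HEIGHT, 480.0);
cap2.set(CV_CAP_PROP_FPS, 15.0);

cout << "cap1 now reports "
     << cap1.get(CV_CAP_PROP_FRAME_WIDTH) << "x" << cap1.get(CV_CAP_PROP_FRAME_HEIGHT)
     << " @ " << cap1.get(CV_CAP_PROP_FPS) << " FPS" << endl;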

I'm afraid you can't do it like this. The OpenCV VideoCapture is really only meant for testing; it uses the simplest underlying operating-system features and doesn't really try to do anything clever.

In addition, simple webcams aren't very controllable or sync-able, even if you can find a lower-level API to talk to them.

If you need to use simple USB webcams for a project, the easiest way is to have an external timed LED flashing at a few hertz, detect the light in each camera, and use that to sync the frames.
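As a rough illustration of that idea (my sketch, not code from the answer): threshold the mean brightness of each frame to decide whether the LED flash is visible, note the frame index where each camera first sees it, and use the difference as the offset between the two streams. The 60.0 threshold below is an arbitrary placeholder that assumes the flash clearly brightens an otherwise dark scene:

// Sketch: returns true when the LED flash is visible in a frame,
// assuming the flash noticeably raises the overall brightness.
bool flashVisible(const cv::Mat& frameBGR)
{
    cv::Mat gray;
    cv::cvtColor(frameBGR, gray, CV_BGR2GRAY);
    return cv::mean(gray)[0] > 60.0;   // placeholder threshold, needs tuning
}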

I know this post is getting quite old, but I had to deal with the same problem recently, so...

I don't think you were saturating the USB bus. If you were, you would have had an explicit message in the terminal. Actually, the creation of a VideoCapture object is quite slow, and I'm quite sure that's the reason for your lag: you initialize your first VideoCapture object cap1, cap1 starts grabbing frames, you initialize your second VideoCapture cap2, cap2 starts grabbing frames, AND THEN you start getting your frames from cap1 and cap2. But the first frame stored by cap1 is older than the one stored by cap2, so... you've got a lag.

What you should do, if you really want to use OpenCV for this, is add some threads: one dealing with left frames and the other with right frames, both doing nothing but saving the last frame received (so you always deal with the newest frames only). If you want to get your frames, you just have to get them from these threads.
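A minimal sketch of that approach (my own, not the code the answer links to below), using C++11 std::thread for brevity (VC++ 2010 would need another threading API such as Win32 threads): each grabber thread keeps overwriting a shared "latest frame" under a mutex, and the display loop only ever reads the newest pair.

#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <thread>
#include <mutex>
#include <atomic>

// One grabber per camera: keep only the most recent frame.
void grabLoop(cv::VideoCapture& cap, cv::Mat& latest, std::mutex& mtx,
              std::atomic<bool>& running)
{
    cv::Mat frame;
    while (running)
    {
        cap >> frame;                         // blocks until a new frame arrives
        std::lock_guard<std::mutex> lock(mtx);
        frame.copyTo(latest);                 // overwrite the shared "latest" frame
    }
}

int main()
{
    cv::VideoCapture cap1(0), cap2(1);
    cv::Mat left, right;
    std::mutex mtxL, mtxR;
    std::atomic<bool> running(true);

    std::thread t1(grabLoop, std::ref(cap1), std::ref(left), std::ref(mtxL), std::ref(running));
    std::thread t2(grabLoop, std::ref(cap2), std::ref(right), std::ref(mtxR), std::ref(running));

    for (;;)
    {
        // Copy out the newest frame from each grabber before displaying it
        cv::Mat l, r;
        { std::lock_guard<std::mutex> lock(mtxL); left.copyTo(l); }
        { std::lock_guard<std::mutex> lock(mtxR); right.copyTo(r); }

        if (!l.empty()) cv::imshow("Img1", l);
        if (!r.empty()) cv::imshow("Img2", r);
        if (cv::waitKey(1) == 'q') break;
    }

    running = false;
    t1.join();
    t2.join();
    return 0;
}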

I've done a little something if you need it, here.
