Is there a way to avoid buffering images when using an RTSP camera in OpenCV?
Let's say we have a time-consuming process that, when called on any frame, takes about 2 seconds to complete.
As we capture frames with VideoCapture, they are stored in a buffer behind the scenes (it may even happen in the camera itself), and when processing of the nth frame completes, the next read returns the (n+1)th frame. However, I would like to grab frames in real time, not in order (i.e., skip the intermediate frames).
For example, you can test the example below:
cv::Mat frame;
cv::VideoCapture cap("rtsp url");
while (true) {
    cap.read(frame);
    cv::imshow("s", frame);
    cv::waitKey(2000); // artificial delay
}
Run the above example and you will see that the displayed frame belongs to the past, not the present. I am wondering how to skip those frames?
After you start VideoCapture, you can use the following property in OpenCV:
cv::VideoCapture cap(0);
cap.set(cv::CAP_PROP_BUFFERSIZE, 1); // now OpenCV buffers just one frame
The task was completed using multi-threaded programming. What is the point of this question? If you are using a weak processor like a Raspberry Pi, or you have a large algorithm that takes a long time to run, you may experience this problem, leading to a large lag in your frames.
This problem was solved in Qt by using the QtConcurrent class, which can run a thread easily and with little coding. Basically, we run our main thread as a processing unit and run a second thread to continuously capture frames, so when the main thread finishes processing one frame, it asks the second thread for the latest one. In the second thread, it is very important to capture frames without delay. Therefore, if one processing pass takes 2 seconds and a frame is captured every 0.2 seconds, we deliberately lose the frames in between.
The project is attached as follows:

1. main.cpp
#include <opencv2/opencv.hpp>
#include "new_thread.h" // custom class that grabs frames (we dedicate a separate thread to it)
#include <QtConcurrent/QtConcurrent>
#include <QThread> // needed to generate the artificial delay that, in a real situation, a weak processor or heavy algorithm would produce

using namespace cv;

int main()
{
    Mat frame;
    new_thread th; // create an object of the new_thread class
    QtConcurrent::run(&th, &new_thread::get_frame); // start the capture thread with QtConcurrent (Qt 5 overload)
    QThread::msleep(2000); // give the camera some time to open
    while (true)
    {
        frame = th.return_frame();
        imshow("frame", frame);
        waitKey(20);
        QThread::msleep(2000); // artificial delay, or a big process
    }
}
2. new_thread.h
#pragma once
#include <opencv2/opencv.hpp>
using namespace cv;

class new_thread
{
public:
    new_thread();         // constructor
    void get_frame(void); // grab frames continuously
    Mat return_frame();   // return the latest frame to main.cpp
private:
    VideoCapture cap;
    Mat frame; // the most recently grabbed frame
};
3. new_thread.cpp
#include "new_thread.h"
#include <QThread>

new_thread::new_thread()
{
    cap.open(0); // open the camera when the class is constructed
}

void new_thread::get_frame() // grab frames continuously
{
    while (true) // loop forever so the driver's buffer never fills with stale frames
    {
        cap.read(frame);
    }
}

Mat new_thread::return_frame()
{
    // Whenever main.cpp needs an updated frame, it can request the last one through
    // this method. Note: in production code, access to the shared Mat should be
    // protected with a mutex, since the capture thread writes it concurrently.
    return frame;
}