
Getting current frame with OpenCV VideoCapture in Python

I am using cv2.VideoCapture to read the frames of an RTSP video link in a Python script. The .read() call sits in a while loop that runs once every second; however, I do not get the most current frame from the stream. I get older frames, and in this way the lag builds up. Is there any way to get the most current frame rather than the older frames that have piled up in the VideoCapture object?
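
A minimal sketch of the pattern described above (the RTSP URL is a placeholder); with this loop the frames returned by read() fall further and further behind the live stream:

import time
import cv2

cap = cv2.VideoCapture('rtsp://example.com/stream')  # placeholder URL

while True:
    ret, frame = cap.read()    # returns the next buffered frame, not the newest one
    if not ret:
        break
    # ... process frame here ...
    time.sleep(1)              # frames keep queueing up during the sleep, so the lag grows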

I also faced the same problem. It seems that once the VideoCapture object is initialized, it keeps storing the frames in a buffer of sorts and returns a frame from that buffer for every read operation. What I did was initialize the VideoCapture object every time I wanted to read a frame, and then release the stream. The following code captures 10 images at an interval of 10 seconds and stores them. The same can be done with while True in a loop.

import time
import cv2

for x in range(0, 10):
    cap = cv2.VideoCapture(0)               # open the camera fresh so no stale frames are buffered
    ret, frame = cap.read()                 # grab a single, current frame
    cv2.imwrite('test' + str(x) + '.png', frame)
    cap.release()                           # release immediately so nothing accumulates
    time.sleep(10)

I've encountered the same problem and found a git repository of Azure samples for their computer vision service. The relevant part is the Camera Capture module, specifically the VideoStream class.

You can see they've implemented a Queue that is updated so that it keeps only the latest frame:

def update(self):
    try:
        while True:
            if self.stopped:
                return

            if not self.Q.full():
                (grabbed, frame) = self.stream.read()

                # if the `grabbed` boolean is `False`, then we have
                # reached the end of the video file
                if not grabbed:
                    self.stop()
                    return

                self.Q.put(frame)

                # Clean the queue to keep only the latest frame
                while self.Q.qsize() > 1:
                    self.Q.get()
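
For context, here is a minimal sketch of how an update loop like this can be wrapped in a threaded reader (the class name and queue size are assumptions, not the exact Azure sample code); a read() that pulls from the queue then always returns the latest frame:

import threading
from queue import Queue

import cv2

class LatestFrameStream:
    """Reads frames on a background thread and keeps only the newest one."""

    def __init__(self, source):
        self.stream = cv2.VideoCapture(source)
        self.Q = Queue(maxsize=2)
        self.stopped = False
        threading.Thread(target=self.update, daemon=True).start()

    def update(self):
        while not self.stopped:
            grabbed, frame = self.stream.read()
            if not grabbed:              # end of the stream / connection lost
                self.stop()
                return
            if not self.Q.full():
                self.Q.put(frame)
            # clean the queue to keep only the latest frame
            while self.Q.qsize() > 1:
                self.Q.get()

    def read(self):
        return self.Q.get()              # blocks until a frame is available

    def stop(self):
        self.stopped = True
        self.stream.release()

Because the consumer reads from the queue rather than from the capture directly, its pace no longer determines how stale the frame is.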

I'm working with a friend on a hack doing the same thing. We don't want to use all the frames. So far we found the very same thing: grab() (or read()) tries to give you every frame, and I guess with RTP it will maintain a buffer and drop frames if you're not responsive enough.

Instead of read() you can also use grab() and retrieve(). The first one asks for the frame; retrieve() then reads it into memory and decodes it. So if you call grab() several times, it will effectively skip the frames in between.

We got away with doing this:

# show some initial image first
cap = cv2.VideoCapture(url)
while True:
    cap.grab()                       # advance the stream without decoding
    if cv2.waitKey(10):              # crude pacing between decodes
        ret, im = cap.retrieve()     # decode the most recently grabbed frame
        # process
        cv2.imshow('frame', im)

Not production code, but...
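
A slightly more explicit sketch of the same grab()/retrieve() idea (the URL and the number of skipped frames are placeholders to tune for your stream): call grab() several times to drain buffered frames, then retrieve() only the last one:

import cv2

cap = cv2.VideoCapture('rtsp://ip:port/stream')  # placeholder URL

while True:
    # drain whatever has accumulated in the capture buffer
    for _ in range(5):             # skip count is a guess; tune it for your stream and loop rate
        cap.grab()
    ret, frame = cap.retrieve()    # decode only the most recently grabbed frame
    if not ret:
        break
    # process frame here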

Using the following was causing a lot of issues for me. The frames being passed to the function were not sequential.

import time
import cv2

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()            # read() returns buffered frames, so they lag behind real time
    function_that_uses_frame(frame)
    time.sleep(0.5)

The following also didn't work for me, as suggested by other comments. I was STILL getting issues with taking the most recent frame.

cap = cv2.VideoCapture(0)

while True:
    ret = cap.grab()                   # grab the next frame without decoding it
    ret, frame = cap.retrieve()        # decode the grabbed frame
    function_that_uses_frame(frame)
    time.sleep(0.5)

Finally, this worked, but it's bloody filthy. I only need to grab a few frames per second, so it will do for the time being. For context, I was using the camera to generate some data for an ML model, and my labels were out of sync with what was being captured.

while True:
    ret = cap.grab()
    ret, frame = cap.retrieve()
    ret = cap.grab()                   # grab and retrieve a second time to discard the stale frame
    ret, frame = cap.retrieve()        # only the second, fresher frame is used
    function_that_uses_frame(frame)
    time.sleep(0.5)

Inside the while loop you can use:

import cv2

while True:
    cap = cv2.VideoCapture()
    urlDir = 'rtsp://ip:port/h264_ulaw.sdp'
    cap.open(urlDir)              # open a fresh connection so no old frames are buffered

    # get the current frame
    _, frame = cap.read()
    cap.release()                 # releasing camera
    image = frame

I made an adaptive system, as the ones others posted here still resulted in a somewhat inaccurate frame representation and gave completely variable results depending on the hardware.

from time import time

import cv2
#...
cap = cv2.VideoCapture(url)
cap_fps = cap.get(cv2.CAP_PROP_FPS)
time_start = time()
time_end = time_start
while True:
    # skip roughly as many frames as arrived while the previous iteration was processing
    time_difference = int(((time_end - time_start) * cap_fps) + 1)  # the +1 may be changed to fit script bandwidth
    for i in range(0, time_difference):
        a = cap.grab()
    _, frame = cap.read()
    time_start = time()
    # Put your code here
    variable = function(frame)
    #...
    time_end = time()

This way the number of skipped frames adapts to how many frames were missed in the video stream, allowing for a much smoother transition and a relatively real-time frame representation.
