
Change GStreamer Video Device Resolution

I am trying to use a USB 3 capture card with GStreamer; however, the only resolution the capture card seems to expose is 1080p60, and this appears to be too much data for my Coral Dev Board to convert and run through object detection in a timely manner.

I have used a USB 2 card at 480p30, and this works, but I would like something a bit higher. I have tried two USB 3 cards, an Elgato Game Capture HD60 S+ and a Pengo 1080p grabber, both of which seem to have the same issue.

When I use my USB 2 card, it exposes multiple resolutions at different framerates, both in OBS on Windows and when listing available formats on Linux; however, both USB 3 cards only expose 1080p60.
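As a side note, the formats a device exposes on Linux can be listed with `v4l2-ctl` from the `v4l-utils` package (the device path below is an assumption; adjust it to match your card):

```shell
# Show every pixel format, frame size, and frame interval the driver exposes
v4l2-ctl --device=/dev/video1 --list-formats-ext
```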

I get very slow, laggy inference at 1080p60, and the program crashes with any other parameters, including 1080p30, 720p60, and 720p30. I think 720p30 would be ideal, but I am unsure how to achieve this.

I have been using this script to run inference on a Coral Dev Board 4GB. I would like to stick to Python if possible.

The warning when 1080p60 is used, in addition to the output being slow and laggy:

Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/glsvgoverlaysink+GlSvgOverlaySink:overlaysink/GstGLImageSinkBin:glimagesinkbin0/GstGLImageSink:sink:

There may be a timestamping problem, or this computer is too slow

The GStreamer pipeline:

v4l2src device=/dev/video1 ! video/x-raw,width=1920,height=1080,framerate=60/1 ! decodebin ! glupload ! tee name=t

t. ! queue ! glfilterbin filter=glbox name=glbox ! video/x-raw,format=RGB,width=300,height=300 ! appsink name=appsink emit-signals=true max-buffers=1 drop=true

t. ! queue ! glsvgoverlaysink name=overlaysink

You can use cv2 (`pip install opencv-python`) to access the USB camera.

Here is a small example of how to get the image from the camera and show it in a separate window:

# import the opencv library
import cv2

# define a video capture object
vid = cv2.VideoCapture(0)

while True:
    # capture the video frame by frame
    ret, frame = vid.read()
    if not ret:
        # stop if the device stops delivering frames
        break

    # display the resulting frame
    cv2.imshow('frame', frame)

    # the 'q' key is set as the quit key;
    # you may use any key of your choice
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# after the loop, release the capture object
vid.release()
# destroy all the windows
cv2.destroyAllWindows()

Your GStreamer pipeline appears to be inefficiently written.

I'd suggest using videoscale and a capsfilter at the earliest point, unless you really want to use the 1080p stream.

For example:

# videoscale + videorate + a capsfilter right after the source will get you
# 720p30 even if the camera itself only offers 1080p60.
# Note: you do NOT need decodebin for video/x-raw.
v4l2src device=/dev/video1 ! videoconvert ! videoscale ! videorate ! video/x-raw,format=RGB,width=1280,height=720,framerate=30/1 ! glupload ! tee name=t

t. ! queue ! glfilterbin filter=glbox name=glbox ! video/x-raw,format=RGB,width=300,height=300 ! appsink name=appsink emit-signals=true max-buffers=1 drop=true

t. ! queue ! glsvgoverlaysink name=overlaysink
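Since the asker wants to stay in Python, the same pipeline description can be handed to `Gst.parse_launch()` via PyGObject. This is a sketch of the early-scaling idea, not a drop-in replacement for the Coral script; the guarded function requires GStreamer and PyGObject to actually run:

```python
# Pipeline description usable with gst-launch-1.0 or Gst.parse_launch
PIPELINE_DESC = (
    'v4l2src device=/dev/video1 ! videoconvert ! videoscale ! videorate ! '
    'video/x-raw,format=RGB,width=1280,height=720,framerate=30/1 ! '
    'glupload ! tee name=t '
    't. ! queue ! glfilterbin filter=glbox name=glbox ! '
    'video/x-raw,format=RGB,width=300,height=300 ! '
    'appsink name=appsink emit-signals=true max-buffers=1 drop=true '
    't. ! queue ! glsvgoverlaysink name=overlaysink'
)


def run_pipeline(desc=PIPELINE_DESC):
    """Launch the pipeline; requires GStreamer + PyGObject installed."""
    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst, GLib

    Gst.init(None)
    pipeline = Gst.parse_launch(desc)
    pipeline.set_state(Gst.State.PLAYING)
    loop = GLib.MainLoop()
    try:
        loop.run()
    finally:
        pipeline.set_state(Gst.State.NULL)
```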

Generally, such off-the-shelf GStreamer filters are faster and more memory-efficient than Python-CV (or even C++ CV).

Besides, for queues feeding high-latency filters, consider limiting the queue size and making them leaky (I use leaky=2 for real-time inference).
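Concretely, the inference branch's queue could be capped and made leaky like this (the property values are suggestions; leaky=2 is the numeric form of leaky=downstream, which drops the oldest buffers when the queue is full):

```
t. ! queue max-size-buffers=1 leaky=downstream ! glfilterbin filter=glbox name=glbox ! ...
```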

Another suggestion: you may use nnstreamer instead of CV filters if what you are doing is running TensorFlow Lite models with the Edge TPU, unless the CV interfaces are more efficient than invoking TensorFlow Lite delegates.
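For reference, an nnstreamer-based branch would look roughly like the fragment below. `tensor_converter`, `tensor_filter`, and `tensor_sink` are real nnstreamer elements, but the model filename is a placeholder, and the exact properties needed to route inference to the Edge TPU delegate depend on your nnstreamer build:

```
... ! videoscale ! video/x-raw,format=RGB,width=300,height=300 !
  tensor_converter !
  tensor_filter framework=tensorflow-lite model=model_edgetpu.tflite !
  tensor_sink
```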
