
Gstreamer stream is not working with OpenCV

I want to use a Gstreamer pipeline directly with OpenCV to manage the image acquisition from a camera. Currently I don't have the camera, so I've been experimenting with getting the video from URIs and local files. I'm using a Jetson AGX Xavier with L4T (Ubuntu 18.04); my OpenCV build includes Gstreamer, and both libraries seem to work fine independently.

The issue I've encountered is that when I pass the string defining the pipeline to the VideoCapture class with cv2.CAP_GSTREAMER, I receive warnings like these:

[ WARN:0] global /home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp (854) open OpenCV | GStreamer warning: Error opening bin: could not link playbin0 to whatever sink I've defined

[ WARN:0] global /home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp (597) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created

I've tried several options; you can see them in the following code:

#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

bool receiver(const char* context)
{
    VideoCapture cap(context, CAP_GSTREAMER);
    int fail = 0;

    // Retry a few times in case the capture is not opened immediately
    while (!cap.isOpened())
    {
        cout << "VideoCapture not opened" << endl;
        fail++;
        if (fail > 10) {
            return false;
        }
    }

    Mat frame;
    while (true) {
        cap.read(frame);

        if (frame.empty())
            break;

        imshow("Receiver", frame);
        if (waitKey(1) == 'r')
            return false;
    }
    destroyWindow("Receiver");
    return true;
}

int main(int argc, char *argv[])
{
    // Pipeline strings I've tried; the first one is for the camera that I don't have yet
    const char* context = "gstlaunch v udpsrc port=5000 caps=\"application/xrtp\" ! rtph264depay ! ffdec_h264 ! ffmpegcolorspace ! ximagesink sync=false";
    const char* test_context = "gstlaunch playbin uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm";
    const char* thermal_context = "playbin uri=file:///home/nvidia/repos/vidtest/thermalVideo.avi ! appsink name=thermalsink";
    const char* local_context = "playbin uri=file:///home/nvidia/repos/flir/Video.avi";

    // GstElement *pipeline;
    // gst_init(&argc, &argv);
    // pipeline = gst_parse_launch(test_context, NULL);
    bool correct_execution = receiver(thermal_context);
    if (correct_execution) {
        cout << "openCV - gstreamer works!" << endl;
    } else {
        cout << "openCV - gstreamer FAILED" << endl;
    }
}

For the commands I've tested, the error isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created is persistent. If I don't define an appsink, the error shown above changes to open OpenCV | GStreamer warning: cannot find appsink in manual pipeline. From the warnings I understand that the pipeline is incomplete or is not created properly, but I don't know why; I've followed the examples I found online and they don't include any other steps.

Also, when using the Gstreamer pipeline directly to visualize the stream, opening a local video seems to work, but it freezes on the first frame and doesn't play; it just stays on the first frame. Do you know why that would happen? With playbin uri pointing to an internet address everything works well... The code is the following:

#include <gst/gst.h>
#include <iostream>

using namespace std;

int main (int argc, char *argv[])
{
    GstElement *pipeline;
    GstBus *bus;
    GstMessage *msg;

    const char* context = "gstlaunch v udpsrc port=5000 caps=\"application/xrtp\" ! rtph264depay ! ffdec_h264 ! ffmpegcolorspace ! ximagesink sync=false";
    const char* local_context = "gst-launch-1.0 -v playbin uri=file:///home/nvidia/repos/APPIDE/vidtest/THERMAL/thermalVideo.avi";
    const char* test_context = "gstlaunch playbin uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm";

    // Initialize gstreamer
    gst_init (&argc, &argv);

    // Create pipeline from terminal command (context)
    pipeline = gst_parse_launch(local_context, NULL);

    // Start the pipeline
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Wait until error or EOS
    bus = gst_element_get_bus (pipeline);
    msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE, (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

    /* Free resources */
    if (msg != NULL)
        gst_message_unref (msg);

    gst_object_unref (bus);
    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_object_unref (pipeline);
}

For using the gstreamer backend, opencv VideoCapture expects a valid pipeline string from your source to appsink (BGR format for color).

Your pipeline strings are not correct, mainly because they start with the binary command (gstlaunch for gst-launch-1.0, playbin) that you would use in a shell for running them.

You may try instead this pipeline for reading an H264-encoded video from RTP/UDP, decoding with the dedicated HW NVDEC, copying from NVMM memory into system memory while converting into BGRx format, then using CPU-based videoconvert to get the BGR format expected by opencv appsink:

    const char* context = "udpsrc port=5000 caps=application/x-rtp,media=video,encoding-name=H264 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1";
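For illustration, here is a minimal sketch of how such a string could be used from VideoCapture (an assumed usage, not code from the question; it presumes an OpenCv build with GStreamer support, and the window name and 'q' quit key are arbitrary choices):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Same pipeline as above, split for readability
    const char* context =
        "udpsrc port=5000 caps=application/x-rtp,media=video,encoding-name=H264 ! "
        "rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! "
        "video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1";

    cv::VideoCapture cap(context, cv::CAP_GSTREAMER);
    if (!cap.isOpened()) {
        std::cerr << "VideoCapture not opened" << std::endl;
        return -1;
    }

    cv::Mat frame;
    while (cap.read(frame) && !frame.empty()) {
        cv::imshow("Receiver", frame);
        if (cv::waitKey(1) == 'q')  // arbitrary quit key
            break;
    }
    return 0;
}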

Or for uridecodebin, the output may be in NVMM memory if a NV decoder has been selected, or in system memory otherwise, so the first nvvidconv instance below copies to NVMM memory, then the second nvvidconv converts into BGRx with HW and outputs into system memory:

    const char* local_context = "uridecodebin uri=file:///home/nvidia/repos/APPIDE/vidtest/THERMAL/thermalVideo.avi ! nvvidconv ! video/x-raw(memory:NVMM) ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1";
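Before handing such a string to OpenCv, you may want to check that it plays on its own from a shell, for instance by replacing the appsink with a display sink (a hedged example: caps containing parentheses must be quoted in the shell, and the file path is the one from the question):

gst-launch-1.0 uridecodebin uri=file:///home/nvidia/repos/APPIDE/vidtest/THERMAL/thermalVideo.avi ! nvvidconv ! 'video/x-raw(memory:NVMM)' ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! autovideosink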

Note for high resolutions that:

  • CPU-based videoconvert may be a bottleneck. Enable all cores and boost the clocks.
  • OpenCv imshow may not be that fast depending on your OpenCv build's graphical backend (GTK, QT4, QT5...). In such a case, a solution is to use an OpenCv VideoWriter with the gstreamer backend to output to a gstreamer video sink (see the sketch after this list).
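As a sketch of that last point (an assumed usage, not tested code: it presumes BGR frames from the capture and an X display for xvimagesink; the file path is the one from the question):

#include <opencv2/opencv.hpp>

int main()
{
    // Source pipeline as suggested above
    cv::VideoCapture cap(
        "uridecodebin uri=file:///home/nvidia/repos/APPIDE/vidtest/THERMAL/thermalVideo.avi ! "
        "nvvidconv ! video/x-raw(memory:NVMM) ! nvvidconv ! video/x-raw,format=BGRx ! "
        "videoconvert ! video/x-raw,format=BGR ! appsink drop=1",
        cv::CAP_GSTREAMER);
    if (!cap.isOpened())
        return -1;

    int w = (int)cap.get(cv::CAP_PROP_FRAME_WIDTH);
    int h = (int)cap.get(cv::CAP_PROP_FRAME_HEIGHT);
    double fps = cap.get(cv::CAP_PROP_FPS);

    // Display through a gstreamer sink instead of imshow; fourcc is 0 because
    // raw frames are pushed into appsrc, with no encoder involved
    cv::VideoWriter writer("appsrc ! queue ! videoconvert ! xvimagesink sync=false",
                           cv::CAP_GSTREAMER, 0, fps > 0 ? fps : 30.0, cv::Size(w, h), true);
    if (!writer.isOpened())
        return -1;

    cv::Mat frame;
    while (cap.read(frame) && !frame.empty())
        writer.write(frame);
    return 0;
}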
