
Gstreamer linking decodebin2 to autovideosink

I'm trying to add some processing logic to a program that chugs away on a local video file, but I'm having some trouble understanding how to translate the following (successful) gst-launch command into code (and supply it with a "pad-added" callback):

gst-launch filesrc location=/path/to/my/video.avi ! decodebin2 ! autovideosink
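
As a quick sanity check, the same description string can also be built in code with gst_parse_launch before wiring the elements up by hand; a minimal sketch, assuming the same placeholder path and omitting error handling:

GError *error = NULL;
/* Build the pipeline straight from the gst-launch description string.
 * The path is a placeholder; a real program should check `error`. */
GstElement *pipeline = gst_parse_launch(
    "filesrc location=/path/to/my/video.avi ! decodebin2 ! autovideosink",
    &error);
gst_element_set_state(pipeline, GST_STATE_PLAYING);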

I've tried my hand at gstreamer's basic-tutorial-3, using decodebin2 in place of audioconvert:

data.source = gst_element_factory_make("filesrc", "source");
data.convert = gst_element_factory_make("decodebin2", "uridecoder");
data.sink = gst_element_factory_make("autovideosink", "autodetect");
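
Since decodebin2 only exposes its source pads after it has inspected the stream, the usual pattern from basic-tutorial-3 is to link filesrc to the decoder statically and defer the decoder-to-sink link to a "pad-added" handler; a minimal sketch, assuming the tutorial-style CustomData struct and a pad_added_handler callback (both names are assumptions here):

/* filesrc -> decodebin2 can be linked immediately; decodebin2 -> autovideosink
 * cannot, because the decoder's source pads only appear once the stream type
 * is known. Struct fields follow basic-tutorial-3 and are assumptions. */
gst_bin_add_many(GST_BIN(data.pipeline), data.source, data.convert, data.sink, NULL);
gst_element_link(data.source, data.convert);

/* Defer the decoder -> sink link to the pad-added callback. */
g_signal_connect(data.convert, "pad-added", G_CALLBACK(pad_added_handler), &data);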

However, I am never able to link data.convert to data.sink as outlined in the example; gst_element_link(data.convert, data.sink) always fails. I suspect there's some special treatment required for decodebin2. Some gstreamer users have mentioned using ghostpads and separate bins, which after a swift attempt also yielded no success:

data.bin = gst_bin_new("processing-bin");
gst_bin_add_many(GST_BIN(data.bin), data.decoder, data.sink, NULL);
gst_element_add_pad(data.bin, 
                    gst_ghost_pad_new("bin_sink",
                                      gst_element_get_static_pad(data.decoder,"sink")));
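
For reference, this is roughly how that bin variant would be finished off; a sketch assuming a data.pipeline field and the 0.10 API: the bin joins the top-level pipeline and the file source links to the ghosted "bin_sink" pad, while the decoder's dynamic video pad still has to be linked to the sink from a pad-added callback.

/* The decoder's sink pad is ghosted on the bin above, so the bin can be
 * treated as a single element with a "bin_sink" pad. Variable names assumed. */
gst_bin_add_many(GST_BIN(data.pipeline), data.source, data.bin, NULL);
gst_element_link_pads(data.source, "src", data.bin, "bin_sink");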

I'm a little confused as to how to continue debugging. Does anyone else have any pointers?

Here is a boiled-down gist of the current code: (gist)

====

Update: My callback is now firing; I think it was because I had an incorrect filename for the filesrc location (whoops).

Now, after following the advice below, I am able to confirm that I'm getting audio and video caps types and check against them in the pad-added callback. However, I'm now getting the following "not-linked" error after one frame of pad processing:

Debugging information: gstavidemux.c(5187): gst_avi_demux_loop (): /GstPipeline:gstreamer-test/GstBin:processing-bin/GstDecodeBin2:uridecoder/GstAviDemux:avidemux0:
streaming stopped, reason not-linked

If your file contains both audio and video, your callback will be called for both audio and video pads. Hence in the callback you should check the caps of the pad and ensure you are trying to link only the video pad to the video sink.
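
A minimal sketch of such a callback, using the 0.10-era caps calls and assuming the CustomData layout from the question's gist:

/* Called once per pad that decodebin2 exposes; only video pads get linked.
 * CustomData and its fields are assumed from the question's gist. */
static void pad_added_handler(GstElement *src, GstPad *new_pad, CustomData *data) {
  GstPad *sink_pad = gst_element_get_static_pad(data->sink, "sink");
  GstCaps *caps = gst_pad_get_caps(new_pad);
  const gchar *type = gst_structure_get_name(gst_caps_get_structure(caps, 0));

  /* Link only pads whose caps start with "video/"; audio pads are ignored. */
  if (g_str_has_prefix(type, "video/") && !gst_pad_is_linked(sink_pad))
    gst_pad_link(new_pad, sink_pad);

  gst_caps_unref(caps);
  gst_object_unref(sink_pad);
}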

I think the main issue I was running into is that I was listening to the decoder for new pads, when I should've been getting the static pad from the video sink. I've updated the gist with my callback function to illustrate the difference.

Basically, it boiled down to this change:

I changed

GstPad *sink_pad = gst_element_get_static_pad(data->decoder, "sink");  

to the following:

GstPad *sink_pad = gst_element_get_static_pad(data->sink, "sink");
