
Gstreamer dynamically change source before EOS

I'm trying to create a dynamic pipeline with GStreamer 1.8.2 and Python 3.5. The goal is to be able to play back some video and change it at EOS, achieving gapless playback in a way similar to using the about-to-finish signal of playbin.

My idea is filesrc -> decodebin -> queue -> videosink, then place a probe on decodebin's video src pad, wait for the EOS event, unlink filesrc and decodebin, create a new filesrc and a new decodebin and link them to the video sink after setting them to PLAYING state. I don't know if this is the best/correct approach, but to my knowledge it should work.
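Roughly, the probe part of that idea looks like this (a simplified sketch, not my actual attempt; the file name, autovideosink and the swap_source helper are placeholders, and a video-only file is assumed):

```python
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

pipeline = Gst.Pipeline.new('player')
src = Gst.ElementFactory.make('filesrc', 'src')
src.set_property('location', 'first.mp4')      # placeholder file
dec = Gst.ElementFactory.make('decodebin', 'dec')
queue = Gst.ElementFactory.make('queue', 'q')
sink = Gst.ElementFactory.make('autovideosink', 'sink')
for e in (src, dec, queue, sink):
    pipeline.add(e)
src.link(dec)
queue.link(sink)

def swap_source():
    # placeholder: unlink the old filesrc/decodebin, add fresh ones,
    # set them to PLAYING and link the new video pad to the queue
    return False  # run only once from the idle handler

def eos_probe(pad, info):
    if info.get_event().type == Gst.EventType.EOS:
        # do the swap from the main loop, not the streaming thread,
        # and drop the EOS so it never reaches the sink
        GLib.idle_add(swap_source)
        return Gst.PadProbeReturn.DROP
    return Gst.PadProbeReturn.OK

def on_pad_added(decodebin, pad):
    # decodebin src pads appear at runtime; hook only the video one
    if pad.query_caps(None).to_string().startswith('video/x-raw'):
        pad.link(queue.get_static_pad('sink'))
        pad.add_probe(Gst.PadProbeType.EVENT_DOWNSTREAM, eos_probe)

dec.connect('pad-added', on_pad_added)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()
```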

My first attempt is this. The playback works fine for the first loop, then the video starts to play too fast. I think there is some problem with timestamps and/or the pipeline clock, but I wasn't able to find a solution or to diagnose the problem better.

EDIT: setting max-lateness to -1 on vaapisink, the playback is still too fast, but much less so. So it IS a timing problem.

Well, what can I tell you - use concat or videomixer/audiomixer (I prefer the concat way).. you do not need any custom solutions :)

Concat does exactly what you want: it switches to another source upon EOS of the current source.. Here is a nice example.
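A minimal sketch of the concat approach in Python/PyGObject (the two file URIs are placeholders; only the video stream is handled here):

```python
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

pipeline = Gst.Pipeline.new('gapless')
concat = Gst.ElementFactory.make('concat', 'concat')
conv = Gst.ElementFactory.make('videoconvert', 'conv')
sink = Gst.ElementFactory.make('autovideosink', 'sink')
for e in (concat, conv, sink):
    pipeline.add(e)
concat.link(conv)
conv.link(sink)

def add_clip(uri):
    dec = Gst.ElementFactory.make('uridecodebin', None)
    dec.set_property('uri', uri)
    # expose only decoded video so this sketch does not have to deal
    # with unlinked audio pads
    dec.set_property('caps', Gst.Caps.from_string('video/x-raw'))
    dec.set_property('expose-all-streams', False)
    pipeline.add(dec)
    # every clip gets its own request pad on concat; concat plays them
    # back to back and only forwards EOS after the last one
    dec.connect('pad-added',
                lambda d, pad: pad.link(concat.get_request_pad('sink_%u')))

add_clip('file:///tmp/first.mp4')    # placeholder URIs
add_clip('file:///tmp/second.mp4')

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect('message::eos', lambda b, m: loop.quit())
bus.connect('message::error', lambda b, m: loop.quit())
loop.run()
pipeline.set_state(Gst.State.NULL)
```

If you also want audio, you would run a second concat for the audio branch, which is where the note about the stream synchroniser below comes in.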

It's a little more tricky with multiple streams (audio, video, subtitles ..); you then need to incorporate a stream synchroniser or something similar, as said here..

Also check this answer, there is already an example of concat there.. but also read the comments.

UPDATE regarding the manual way:

Using videomixer and audiomixer is a little bit tricky.

Let's think about the video part..

You will create a bin for the stream you want to play (the first one) - let's say you have uridecodebin there, which will preroll the whole thing and create pads.. when you find out the new pad is video/x-raw, you add a pad probe there and plug it into videomixer.

Then some time later (when possible or so) you will create another bin with another uridecodebin (so this is the second "track" in your hypothetical playlist) and again preroll it. When you get the pads you do not connect them to videomixer but block the whole thing (I think PAUSED is suitable enough).

When the first one goes EOS you will enable the second one, and in the first one you will flush the rest of the video.
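To give a rough idea of that block-and-handover step, something like this (just a sketch; the on_eos callback, where you link into videomixer and how you tear down the first bin are left as placeholders):

```python
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

def hold_second_source(video_pad):
    # blocking probe on the second bin's video pad: buffers are held
    # here until the probe is removed with pad.remove_probe(probe_id)
    return video_pad.add_probe(
        Gst.PadProbeType.BLOCK | Gst.PadProbeType.BUFFER,
        lambda pad, info: Gst.PadProbeReturn.OK)

def handover_on_eos(first_video_pad, on_eos):
    # when the first bin's video pad sees EOS, drop it and let the
    # main loop do the switch (unblock the second pad, link it to a
    # new videomixer sink pad, flush/remove the first bin)
    def probe(pad, info):
        if info.get_event().type == Gst.EventType.EOS:
            GLib.idle_add(on_eos)
            return Gst.PadProbeReturn.DROP
        return Gst.PadProbeReturn.OK
    return first_video_pad.add_probe(
        Gst.PadProbeType.EVENT_DOWNSTREAM, probe)
```

The switch itself is done from the main loop (GLib.idle_add) because you should not relink elements from the streaming thread.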

You will do the same thing with audio, of course..

Now the tricky part - you have to align the video against the audio (the audio is more precise; you compare timestamps until the audio matches the video and throw away the rest of the audio or so) - this is needed in order to not go out of sync.

This approach is very hard to do.. I did it once and we had endless problems with audio/video synchronisation.
