Dynamically re-sizing images in a GStreamer pipeline in python

I am trying to create a program that applies various animations to different images simultaneously. One of the effects I want is zooming into a picture, which I achieve by keeping a base frame at a fixed size while the image grows and shrinks. But when I try to change the size of an image dynamically it causes an error, and searching the web did not turn up the right solution. Below is my code. I would be grateful if anyone could point me to the right examples to learn from (preferably Python examples).

#!/usr/bin/python
import gobject
import time

gobject.threads_init()
import pygst

pygst.require("0.10")
import gst

p = gst.parse_launch("""
    uridecodebin uri=file:///home/jango/Pictures/3.jpg name=src1 ! queue ! videoscale ! ffmpegcolorspace !
        imagefreeze ! capsfilter name=vfps caps="video/x-raw-yuv, framerate=60/1, width=200, height=150" ! mix.
    uridecodebin uri=file:///home/jango/Pictures/2.jpg name=src2 ! queue ! videoscale ! ffmpegcolorspace !
        imagefreeze ! video/x-raw-yuv, framerate=60/1, width=200, height=150 ! mix.
    uridecodebin uri=file:///home/jango/Pictures/1.jpg name=src ! queue ! videoscale ! ffmpegcolorspace !
        imagefreeze ! video/x-raw-yuv, framerate=60/1, width=200, height=150 ! mix.
    uridecodebin uri=file:///home/jango/Pictures/mia_martine.jpg ! queue ! videoscale ! ffmpegcolorspace !
        imagefreeze ! video/x-raw-yuv, framerate=60/1, width=200, height=150 ! mix.
    uridecodebin uri=file:///home/jango/Pictures/4.jpg ! queue ! videoscale ! ffmpegcolorspace !
        imagefreeze ! video/x-raw-yuv, framerate=60/1, width=200, height=150 ! mix.
    uridecodebin uri=file:///home/jango/Pictures/mia_marina1.jpg ! queue ! videoscale ! ffmpegcolorspace !
        imagefreeze ! video/x-raw-yuv, framerate=60/1, width=200, height=150 ! mix.
    videotestsrc pattern=2 ! video/x-raw-yuv, framerate=10/1, width=1024, height=768 ! videomixer name=mix sink_6::zorder=0 ! ffmpegcolorspace ! theoraenc ! oggmux name=mux !
        filesink location=1.ogg
    filesrc location=/home/jango/Music/mp3/flute_latest.mp3 ! decodebin ! audioconvert ! vorbisenc ! queue ! mux.
""")

m = p.get_by_name("mix")
s0 = m.get_pad("sink_0")
s0.set_property("zorder", 1)
q = s0.get_caps()
q.make_writable()

# Animate position and alpha of the first mixer input with linear interpolation.
control = gst.Controller(s0, "ypos", "alpha", "xpos")
control.set_interpolation_mode("ypos", gst.INTERPOLATE_LINEAR)
control.set_interpolation_mode("alpha", gst.INTERPOLATE_LINEAR)
control.set_interpolation_mode("xpos", gst.INTERPOLATE_LINEAR)
control.set("ypos", 0, 0)
control.set("ypos", 5 * gst.SECOND, 600)
control.set("xpos", 0, 0)
control.set("xpos", 5 * gst.SECOND, 500)
control.set("alpha", 0, 0)
control.set("alpha", 5 * gst.SECOND, 1.0)

s1 = m.get_pad("sink_1")
s1.set_property("zorder", 2)


control1 = gst.Controller(s1, "xpos", "alpha")
control1.set_interpolation_mode("xpos", gst.INTERPOLATE_LINEAR)
control1.set_interpolation_mode("alpha", gst.INTERPOLATE_LINEAR)
control1.set("xpos", 0, 0)
control1.set("xpos", 5 * gst.SECOND, 500)
control1.set("alpha", 0, 0)
control1.set("alpha", 5 * gst.SECOND, 1.0)
#

s2 = m.get_pad("sink_2")
s2.set_property("zorder", 3)

control2 = gst.Controller(s2, "ypos", "alpha", "xpos")
control2.set_interpolation_mode("ypos", gst.INTERPOLATE_LINEAR)
control2.set_interpolation_mode("xpos", gst.INTERPOLATE_LINEAR)
control2.set_interpolation_mode("alpha", gst.INTERPOLATE_LINEAR)
control2.set("xpos", 0, 0)
control2.set("xpos", 5 * gst.SECOND, 500)
control2.set("ypos", 0, 0)
control2.set("ypos", 5 * gst.SECOND, 300)
control2.set("alpha", 0, 0)
control2.set("alpha", 5 * gst.SECOND, 1.0)

s3 = m.get_pad("sink_3")
s3.set_property("zorder", 4)

control3 = gst.Controller(s3, "ypos", "alpha", "xpos")
control3.set_interpolation_mode("ypos", gst.INTERPOLATE_LINEAR)
control3.set_interpolation_mode("alpha", gst.INTERPOLATE_LINEAR)
control3.set_interpolation_mode("xpos", gst.INTERPOLATE_LINEAR)
control3.set("ypos", 0, 0)
control3.set("ypos", 5 * gst.SECOND, 600)
control3.set("xpos", 0, 0)
control3.set("xpos", 5 * gst.SECOND, 200)
control3.set("alpha", 0, 0)
control3.set("alpha", 5 * gst.SECOND, 1.0)

s4 = m.get_pad("sink_4")
s4.set_property("zorder", 5)

control4 = gst.Controller(s4, "ypos", "alpha", "xpos")
control4.set_interpolation_mode("ypos", gst.INTERPOLATE_LINEAR)
control4.set_interpolation_mode("alpha", gst.INTERPOLATE_LINEAR)
control4.set_interpolation_mode("xpos", gst.INTERPOLATE_LINEAR)
control4.set("ypos", 0, 0)
control4.set("ypos", 5 * gst.SECOND, 300)
control4.set("xpos", 0, 0)
control4.set("xpos", 5 * gst.SECOND, 200)
control4.set("alpha", 0, 0)
control4.set("alpha", 5 * gst.SECOND, 1.0)

s5 = m.get_pad("sink_5")
s5.set_property("zorder", 6)

control5 = gst.Controller(s5, "ypos", "alpha", "xpos")
control5.set_interpolation_mode("ypos", gst.INTERPOLATE_LINEAR)
control5.set_interpolation_mode("alpha", gst.INTERPOLATE_LINEAR)
control5.set_interpolation_mode("xpos", gst.INTERPOLATE_LINEAR)
control5.set("ypos", 0, 0)
control5.set("ypos", 5 * gst.SECOND, 0)
control5.set("xpos", 0, 0)
control5.set("xpos", 5 * gst.SECOND, 200)
control5.set("alpha", 0, 0)
control5.set("alpha", 5 * gst.SECOND, 1.0)

# Play for a few seconds, then take the pipeline down to READY and try to
# change the caps on the first mixer input on the fly.
p.set_state(gst.STATE_PLAYING)
time.sleep(3)
p.set_state(gst.STATE_READY)
m = p.get_by_name("mix")
s0 = m.get_pad("sink_0")
q = s0.get_caps()
print q
if q.is_fixed():
    print "not doable"
else:
    caps = gst.caps_from_string("video/x-raw-yuv, framerate=60/1, width=1000, height=1000")
    s0.set_caps(caps)
p.set_state(gst.STATE_PLAYING)
gobject.MainLoop().run()

It would also be great if anyone could point me to a good source of GStreamer tutorials for Python developers.

You can change the image size dynamically, but a few conditions have to be met.

First, your pipeline should be built something like: source ! videorate ! ffvideoscale ! colorspace ! capsfilter caps="caps" ....

Second, in Python you get the caps property from the capsfilter element and change the resolution in those caps.

This should work. One warning: if I remember correctly, you must drive the resolution changes from a gobject.timeout_add and leave more than 100 ms between changes.
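Putting those steps together, here is a minimal sketch of the idea, assuming GStreamer 0.10 / pygst as in the question. The element name scaler_caps, the videotestsrc input, the autovideosink output and the list of resolutions are placeholders chosen for illustration, and videoscale is used where ffvideoscale is mentioned above; adapt them to the real pipeline.

#!/usr/bin/python
import gobject
gobject.threads_init()
import pygst
pygst.require("0.10")
import gst

# source ! videorate ! scaler ! colorspace ! capsfilter, with the capsfilter
# named so its caps can be retuned while the pipeline is running.
p = gst.parse_launch("""
    videotestsrc ! videorate ! videoscale ! ffmpegcolorspace !
        capsfilter name=scaler_caps caps="video/x-raw-yuv, framerate=30/1, width=200, height=150" !
        autovideosink
""")

caps_filter = p.get_by_name("scaler_caps")
sizes = [(200, 150), (400, 300), (800, 600)]
state = {"step": 0}

def change_resolution():
    # Swap the caps on the capsfilter; videoscale upstream rescales to match.
    w, h = sizes[state["step"] % len(sizes)]
    state["step"] += 1
    caps_filter.set_property(
        "caps",
        gst.caps_from_string("video/x-raw-yuv, framerate=30/1, width=%d, height=%d" % (w, h)))
    return True  # keep the timeout firing

p.set_state(gst.STATE_PLAYING)
# Leave well over 100 ms between resolution changes, as noted above.
gobject.timeout_add(1000, change_resolution)
gobject.MainLoop().run()

For the zoom effect in the question, the same idea should apply to each image branch: give its capsfilter a name (as is already done for vfps) and animate the width and height there, instead of trying to set caps directly on the videomixer sink pad.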
