ARGB images to gstreamer pipeline
To speed up the animation of a series of matplotlib-generated images into a video, I would like to convert the following (working) example to gstreamer, so that I can use the hardware encoding capabilities of the Raspberry Pi 2 GPU. The starting point is the code below, which pipes ARGB video frames to ffmpeg via stdin. How can I replace the command string so that it targets gst-launch-1.0 instead?
import numpy as np
import matplotlib.pylab as plt
import time
import subprocess

# Number of frames
nframes = 200

# Generate data
x = np.linspace(0, 100, num=nframes)
y = np.random.random_sample(np.size(x))

def testSubprocess(x, y):
    start_time = time.time()

    # Set up the figure
    fig = plt.figure(figsize=(15, 9))
    canvas_width, canvas_height = fig.canvas.get_width_height()

    # First frame
    ax0 = plt.plot(x, y)
    pointplot, = plt.plot(x[0], y[0], 'or')

    def update(frame):
        # your matplotlib code goes here
        pointplot.set_data(x[frame], y[frame])

    # Open an ffmpeg process
    outf = 'testSubprocess.mp4'
    cmdstring = ('ffmpeg',
                 '-y', '-r', '1',  # overwrite, 1 fps
                 '-s', '%dx%d' % (canvas_width, canvas_height),  # size of image string
                 '-pix_fmt', 'argb',  # format
                 '-f', 'rawvideo', '-i', '-',  # tell ffmpeg to expect raw video from the pipe
                 '-vcodec', 'mpeg4', outf)  # output encoding
    p = subprocess.Popen(cmdstring, stdin=subprocess.PIPE)

    # Draw frames and write to the pipe
    for frame in range(nframes):
        # draw the frame
        update(frame)
        fig.canvas.draw()

        # extract the image as an ARGB string
        string = fig.canvas.tostring_argb()

        # write to pipe
        p.stdin.write(string)

    # Finish up
    p.communicate()
    print("Movie written in %s seconds" % (time.time() - start_time))

if __name__ == "__main__":
    testSubprocess(x, y)
The component to use in the pipeline is omxh264enc, but that can also be done as a second step once I understand how to feed data into the pipeline. An answer based on gst-python would also be perfectly acceptable.
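For the gst-python route, one rough sketch (untested; it assumes PyGObject and the GStreamer Python bindings are installed on the Pi, and the helper name, framerate, and filename are placeholders of mine) is to build a pipeline description for `Gst.parse_launch()` that replaces stdin with an `appsrc`:

```python
# Hypothetical helper: builds a GStreamer pipeline description for
# Gst.parse_launch(), with an appsrc accepting the raw ARGB frames from
# fig.canvas.tostring_argb(). width/height must match
# fig.canvas.get_width_height(); the 5/1 framerate is an assumption.
def gst_python_pipeline(width, height, outfile, fps=5):
    caps = ('video/x-raw,format=ARGB,width=%d,height=%d,framerate=%d/1'
            % (width, height, fps))
    return ('appsrc name=src caps=%s ! videoconvert ! omxh264enc ! '
            'h264parse ! mp4mux ! filesink location=%s' % (caps, outfile))

# Usage sketch (requires PyGObject; not runnable off the Pi):
#   from gi.repository import Gst
#   Gst.init(None)
#   pipeline = Gst.parse_launch(gst_python_pipeline(w, h, 'out.mp4'))
#   src = pipeline.get_by_name('src')
#   pipeline.set_state(Gst.State.PLAYING)
#   src.emit('push-buffer', Gst.Buffer.new_wrapped(fig.canvas.tostring_argb()))
```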
"videoparse" looks like the key piece you want here, to tell GStreamer what the format of the input is:
gst-launch-1.0 fdsrc ! videoparse width=128 height=128 format=argb framerate=5/1 ! videorate ! videoconvert ! omxh264enc ! h264parse ! mp4mux ! filesink location=outputfile
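To drop that pipeline into the question's script, the ffmpeg cmdstring could be replaced by something like the sketch below (untested on a Pi; the helper name, the 5/1 framerate, and the output filename are my placeholders, and width/height must match the canvas size):

```python
# Sketch: gst-launch-1.0 command equivalent to the pipeline above, reading
# raw ARGB frames from stdin via fdsrc fd=0. width/height must match
# fig.canvas.get_width_height(); framerate and outfile are assumptions.
def gst_cmdstring(width, height, outfile, fps=5):
    return ['gst-launch-1.0',
            'fdsrc', 'fd=0', '!',
            'videoparse',
            'width=%d' % width, 'height=%d' % height,
            'format=argb', 'framerate=%d/1' % fps, '!',
            'videorate', '!',
            'videoconvert', '!',
            'omxh264enc', '!',
            'h264parse', '!',
            'mp4mux', '!',
            'filesink', 'location=%s' % outfile]

# Then, exactly as in the ffmpeg version:
#   p = subprocess.Popen(gst_cmdstring(canvas_width, canvas_height,
#                                      'testSubprocess.mp4'),
#                        stdin=subprocess.PIPE)
```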