
FFmpeg - Add frames to MP4 in multiple calls

I'm trying to use FFmpeg to create an MP4 file from user-created content in an Android app.

The user-created content is rendered with OpenGL. I was thinking I could render it to a FrameBuffer and save it as a temporary image file. Then, take this image file and add it to an MP4 file with FFmpeg. I'd do this frame by frame, creating and deleting these temporary image files as I go while I build the MP4 file.

The issue is I will never have all of these image files at one time, so I can't just use the typical call:

ffmpeg -start_number n -i test_%d.jpg -vcodec mpeg4 test.mp4

Is this possible with FFmpeg? I can't find any information about adding frames to an MP4 file one-by-one while keeping the correct framerate, etc.

Use STDIO to get the raw frames to FFmpeg. Note that this doesn't mean exporting entire images... all you need is the pixel data. Something like this...

ffmpeg -f rawvideo -vcodec rawvideo -s 1920x1080 -pix_fmt rgb24 -r 30 -i - -vcodec mpeg4 test.mp4

Using -i - means FFmpeg will read from the pipe.

I think from there you would just send in the raw pixel values via the pipe, one byte per color per pixel. FFmpeg will know when you're done with each frame since you've passed the frame size to it.
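The pipe approach above can be sketched like this (a minimal Python sketch of the same idea; on Android you'd write the same bytes to the FFmpeg process's output stream instead). The `frame_size` helper and the solid-gray test frames are illustrative assumptions standing in for real OpenGL framebuffer reads:

```python
# Sketch: feed raw rgb24 frames to FFmpeg over a pipe, one frame at a time.
# Assumes the `ffmpeg` binary is on PATH.
import shutil
import subprocess

WIDTH, HEIGHT, FPS = 320, 240, 30

def frame_size(width, height):
    """Bytes per frame for rgb24: 3 bytes (R, G, B) per pixel."""
    return width * height * 3

def encode(frames, out_path="test.mp4"):
    """Pipe an iterable of raw rgb24 frames into FFmpeg's stdin."""
    cmd = [
        "ffmpeg", "-y",
        "-f", "rawvideo", "-vcodec", "rawvideo",
        "-s", f"{WIDTH}x{HEIGHT}", "-pix_fmt", "rgb24", "-r", str(FPS),
        "-i", "-",               # "-" = read input from stdin (the pipe)
        "-vcodec", "mpeg4", out_path,
    ]
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
    for frame in frames:
        # FFmpeg cuts a new frame every frame_size(WIDTH, HEIGHT) bytes,
        # so no per-frame delimiter is needed.
        proc.stdin.write(frame)
    proc.stdin.close()
    return proc.wait()

if shutil.which("ffmpeg"):
    # 30 solid-gray placeholder frames -> a one-second clip
    gray = bytes([128, 128, 128]) * (WIDTH * HEIGHT)
    encode(gray for _ in range(30))
```

Because the frame dimensions and pixel format are declared up front, the only thing each call has to supply is the next frame's bytes, which is exactly the "add frames in multiple calls" behavior the question asks for.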

