
How to encode camera frames into mp4 on android

I take camera preview frames from the Android camera at 640x480 (which is sufficient for me) and do some modifications on them. But now I need to encode them into a new MP4 file (with audio).

Is this somehow possible? I can't use ffmpeg because of its license, but I've found the Stagefright framework, which should probably be capable of doing this. However, I could not find any official documentation or tutorials for what I need to do.

Is there a way to do it with this framework? I don't need code; I would be glad if you could just point me in the right direction.

There is one scenario where the described use case is realized. Consider a setup where the camera output is fed to an OpenGL library, some effects are applied to the preview frames, and the result needs to be recorded.

In this case, you can use the traditional MediaRecorder with a GrallocSource instead of a CameraSource. The setup would look like this:

MediaRecorder is set up with the GrallocSource. The input surfaces for recording are provided by the combined Camera + OpenGL operation, which implements a SurfaceTextureClient. A good example of this can be found in the SurfaceMediaSource_test modules.
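The GrallocSource / SurfaceTextureClient path above is part of the native framework, not the SDK. As a rough public-API analogue (API 21+), the sketch below records into an MP4 from a Surface via MediaRecorder.VideoSource.SURFACE; the Camera + OpenGL pipeline would render into the returned Surface. The class name and output-path parameter are placeholders, not from the original answer.

```java
import android.media.MediaRecorder;
import android.view.Surface;

public class SurfaceRecorder {
    private final MediaRecorder recorder = new MediaRecorder();

    /** Configure MP4 recording whose video input is a Surface (API 21+). */
    public Surface start(String outputPath) throws Exception {
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE); // surface-fed input
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
        recorder.setVideoSize(640, 480);
        recorder.setVideoFrameRate(30);
        recorder.setVideoEncodingBitRate(2000000);
        recorder.setOutputFile(outputPath);
        recorder.prepare();
        // Must be fetched between prepare() and start(); render the
        // Camera + OpenGL output into this Surface to have it recorded.
        Surface input = recorder.getSurface();
        recorder.start();
        return input;
    }

    public void stop() {
        recorder.stop();
        recorder.release();
    }
}
```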

stagefright is quite good if you must support API 9 and higher. But this framework is not official, as you saw. You can use the sample code in platform/frameworks/av at your own risk.

The Google source includes CameraSource, which provides the camera frames directly to the encoder. While this approach may be much more efficient (the pixels are not copied to user space at all), it does not allow manipulation. It is possible to modify the C++ source, but I strongly recommend accessing the Camera in Java and passing the preview frames via JNI to the stagefright (OpenMAX) encoder. On some devices, this may force you to use a software encoder. You must convert the video frames to a YUV planar format for the encoder. See libyuv for optimized converters.
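To illustrate the conversion the answer mentions: Android preview callbacks deliver NV21 by default, while many encoders expect planar I420. Below is a naive pure-Java sketch of that layout change; libyuv's equivalent routine does the same thing far faster, so treat this only as a reference for what the conversion involves.

```java
/**
 * Naive NV21 (camera preview default) to I420 (planar YUV420) conversion.
 * Only illustrates the layout change the encoder expects; use libyuv in
 * production for speed.
 */
public static byte[] nv21ToI420(byte[] nv21, int width, int height) {
    int ySize = width * height;
    byte[] i420 = new byte[ySize * 3 / 2];
    // The Y plane is identical in both formats.
    System.arraycopy(nv21, 0, i420, 0, ySize);
    // NV21 stores interleaved V/U pairs after the Y plane;
    // I420 wants a full U plane followed by a full V plane.
    int uOffset = ySize;
    int vOffset = ySize + ySize / 4;
    for (int i = 0; i < ySize / 4; i++) {
        i420[vOffset + i] = nv21[ySize + 2 * i];     // V sample
        i420[uOffset + i] = nv21[ySize + 2 * i + 1]; // U sample
    }
    return i420;
}
```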

If you can restrict your support to API 16 and higher, it is safer to use the official Java MediaCodec API.
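As a minimal sketch of that route (assuming a 640x480 H.264 target; the bitrate, frame rate, and helper class are illustrative choices, not requirements): configure a MediaCodec encoder, feed the converted YUV frames into its input buffers, and write the encoded output to a muxer (MediaMuxer needs API 18; below that you need your own MP4 writer).

```java
import java.io.IOException;

import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;

public class EncoderSetup {
    /** Create and start a 640x480 H.264 encoder; minimal sketch for API 16+. */
    public static MediaCodec createEncoder() throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", 640, 480);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
        // Devices report either planar or semi-planar YUV420; query the
        // codec's capabilities in real code instead of hard-coding one.
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Planar);

        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        encoder.start();
        return encoder;
    }
}
```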

