
Android encoding using MediaCodec and a Surface

I've been rendering video with a MediaCodec decoder directly to a Surface taken from a SurfaceView in my UI. This works great.

I am now attempting to use MediaCodec as an encoder. As a test, I want to render to the Surface (as above) and loop back through a different MediaCodec instance configured as an encoder.

I see the createInputSurface() method of the encoder. I think I want the encoder to create this Surface and then have the decoder MediaCodec use it as the surface to draw to. First off, is this possible?

Secondly, I'm not sure how to create a SurfaceView from the Surface that the encoder creates. I've only ever extracted a Surface from a SurfaceView, and I don't see from the docs how to do this in reverse.

Surfaces are the "producer" side of a producer-consumer arrangement. Generally speaking, the API is centered around consumers, which create both ends and then hand the producer interface (the Surface) back to you.

So for a SurfaceView or a MediaCodec encoder, you create the object, and get its Surface. Then you send buffers of graphics data to them, with Canvas, OpenGL ES, or a MediaCodec decoder.
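A minimal sketch of the SurfaceView half of that pattern (the callback interface and `getSurface()` are the real SDK calls; how you obtain the SurfaceView is up to your layout):

```java
import android.view.Surface;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class SurfaceSources {
    // The SurfaceView's display Surface only exists once SurfaceFlinger has
    // set up the consumer end, which is why you must wait for the callback.
    static void watchSurfaceView(SurfaceView view) {
        view.getHolder().addCallback(new SurfaceHolder.Callback() {
            @Override public void surfaceCreated(SurfaceHolder holder) {
                Surface displaySurface = holder.getSurface();  // valid from here on
                // ... safe to hand to a MediaCodec decoder now
            }
            @Override public void surfaceChanged(SurfaceHolder holder,
                    int format, int width, int height) {}
            @Override public void surfaceDestroyed(SurfaceHolder holder) {}
        });
    }
}
```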

There is no way to take the encoder's input Surface and use it as the SurfaceView's display Surface -- they're two different pipelines. The SurfaceView's consumer is in the system compositor (SurfaceFlinger), which is why you have to wait for the "surface created" callback to fire. The MediaCodec encoder's consumer is in the mediaserver process, though the asynchronicity is better concealed.

Sending the MediaCodec decoder output to a SurfaceView is straightforward, as is sending the output to a MediaCodec encoder. As you surmised, just pass the encoder's input Surface to the decoder. Where life gets interesting is when you want to do both of those things at the same time.
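The decoder-to-encoder loopback can be sketched as follows. The MediaCodec calls are the real API; the MIME type, resolution, and bitrate are illustrative assumptions, and feeding/draining the codecs is elided:

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;

public class DecodeToEncode {
    public static void setUp() throws Exception {
        // Configure the encoder to take input from a Surface.
        MediaFormat encFormat = MediaFormat.createVideoFormat("video/avc", 1280, 720);
        encFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        encFormat.setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000);
        encFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        encFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(encFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        // Must be called after configure() and before start().
        Surface inputSurface = encoder.createInputSurface();
        encoder.start();

        // The encoder's input Surface becomes the decoder's output Surface.
        MediaFormat decFormat = MediaFormat.createVideoFormat("video/avc", 1280, 720);
        MediaCodec decoder = MediaCodec.createDecoderByType("video/avc");
        decoder.configure(decFormat, inputSurface, null, 0);
        decoder.start();
        // ... feed encoded input to the decoder, and release its output
        // buffers with render=true so each frame is sent to the Surface
        // (and hence into the encoder).
    }
}
```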

The code underlying Surface (called BufferQueue) should be capable (as of Lollipop) of multiplexing, but I'm not aware of an API in Lollipop that exposes the capability to applications. Which means you're stuck doing things the hard way.

The hard way involves creating a SurfaceTexture (a/k/a GLConsumer), which is the consumer end of the pipe. From that you can create a Surface, using the sole constructor. You hand that to the MediaCodec decoder. Now every frame that comes out will be converted to a GLES texture by SurfaceTexture. You can render those to the SurfaceView and the encoder's input Surface.
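A sketch of that setup, assuming an EGL context is already current on the calling thread (the render loop described in the comments is summarized, not implemented):

```java
import android.graphics.SurfaceTexture;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.view.Surface;

public class DecoderToTexture {
    // Creates the Surface to pass to MediaCodec#configure() on the decoder.
    public static Surface createDecoderSurface() {
        // Generate an external-OES texture for SurfaceTexture to fill.
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

        SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
        surfaceTexture.setOnFrameAvailableListener(st -> {
            // Signal the render thread that a new frame can be latched.
        });
        // The sole Surface constructor: wraps the SurfaceTexture's producer end.
        return new Surface(surfaceTexture);
    }
    // Per frame, on the GL thread:
    //   surfaceTexture.updateTexImage();   // latch newest frame into the texture
    //   1) make the SurfaceView's EGL surface current, draw a textured quad,
    //      eglSwapBuffers();
    //   2) make the encoder-input EGL surface current (created from
    //      createInputSurface()), draw again, set the timestamp with
    //      EGLExt.eglPresentationTimeANDROID(), eglSwapBuffers().
}
```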

You can find various examples in Grafika, and a longer explanation of the mechanics in the graphics architecture doc.


 