
MediaCodec simultaneous encoding and decoding

I am trying to apply effects to the frames of a video using the GPU and then to re-encode those frames into a new result video.

In the interest of performance I have implemented the following flow:

There are 3 different threads, each with its own OpenGL context. These contexts are set up in such a way that they share textures between them.

Thread 1 extracts frames from the video and holds them in GPU memory as textures, similar to this example.

Thread 2 processes the textures using a modified version of GPUImage that also outputs textures in GPU memory.

Finally, thread 3 writes the textures obtained from thread 2 into a new video file, similar to the method described here.

Frame order is maintained using queues between threads 1 and 2, and between threads 2 and 3. Textures are deleted from memory manually after they are used for processing / writing.

The whole point of this flow is to separate each process in hopes that the final performance will be that of the slowest of the 3 threads.

THE PROBLEM:

The final video is 90% black frames, only some of them being correct.

I have checked the individual results of extraction and processing, and they work as expected. Also note that the 3 components described in the 3 threads work just fine together in a single thread.

I have tried to synchronise thread 1 and thread 3, and after adding an extra 100ms sleep time to thread 1 the video turns out just fine, with maybe 1 or 2 black frames. Seems to me like the two instances of the decoder and encoder are unable to work simultaneously.

I will edit this post with any extra requested details.

Sharing textures between OpenGL ES contexts requires some care. The way it's implemented in Grafika's "show + capture camera" Activity is broken; see this issue for details. The basic problem is that you essentially need to issue memory barriers when the texture is updated; in practical terms that means issuing glFinish() on the producer side, re-binding the texture on the consumer side, and doing all of this in synchronized blocks.
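A minimal sketch of what that synchronization looks like, assuming the two EGL contexts were created with a shared context so the texture name is valid in both. The class and method names here are illustrative, not from the original post, and real code would also need to guard against the consumer reading before the first publish:

```java
import android.opengl.GLES20;

// Hypothetical producer/consumer wrapper around one shared texture.
public class SharedTextureSync {
    private final Object mLock = new Object();

    // Producer thread (its own EGL context current): draw, then flush.
    public void publishFrame(int textureId) {
        synchronized (mLock) {
            // ... render into textureId (e.g. via an FBO attachment) ...
            GLES20.glFinish();  // memory barrier: guarantee the writes landed
        }
    }

    // Consumer thread (a different shared EGL context current): re-bind
    // before sampling so this context picks up the updated contents.
    public void consumeFrame(int textureId) {
        synchronized (mLock) {
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
            // ... sample from textureId ...
        }
    }
}
```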

Your life will be simpler (and more efficient) if you can do all of the GLES work on a single thread. In my experience, having more than one GLES context active at a time is unwise, and you'll save yourself some pain by finding an alternative.

You probably want something more like this:

  • Thread #1 reads the file and feeds frames into a MediaCodec decoder. The decoder sends its output to a SurfaceTexture's Surface.
  • Thread #2 has the GLES context. It created the SurfaceTexture that thread #1 is sending the output to. It processes the images and renders the output onto the Surface of a MediaCodec encoder.
  • Thread #3, which created the MediaCodec encoder, sits waiting for the encoded output. As output is received, it's written to disk. Note that the use of MediaMuxer can stall; see this blog post for more.
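The plumbing for that pipeline can be sketched roughly as follows, assuming API 18+ (Surface input to the encoder via createInputSurface()) and H.264; format negotiation, EGL setup, and error handling are omitted, and variables like width, oesTextureId, and inputFormat stand in for values the real app would supply:

```java
import android.graphics.SurfaceTexture;
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;

// Thread #3 side: create the encoder and grab its input Surface.
MediaFormat fmt = MediaFormat.createVideoFormat("video/avc", width, height);
fmt.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
fmt.setInteger(MediaFormat.KEY_BIT_RATE, 4000000);
fmt.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
fmt.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(fmt, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface encoderInput = encoder.createInputSurface();  // handed to thread #2

// Thread #2 side: GLES thread owns the SurfaceTexture; oesTextureId is a
// GL_TEXTURE_EXTERNAL_OES texture created in this thread's context.
SurfaceTexture st = new SurfaceTexture(oesTextureId);
Surface decoderOutput = new Surface(st);              // handed to thread #1

// Thread #1 side: the decoder renders straight into that Surface.
MediaCodec decoder = MediaCodec.createDecoderByType("video/avc");
decoder.configure(inputFormat, decoderOutput, null, 0);
```

Only the two Surface objects cross thread boundaries; each codec and the SurfaceTexture stay on the thread that created them.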

In all cases, the only communication between threads (and, under the hood, processes) is done through Surface. The SurfaceTexture and MediaCodec instances are created and used from a single thread; only the producer endpoint (the Surface) is passed around.

One potential trouble point is flow control -- SurfaceTextures will drop frames if you feed them too quickly. Combining threads #1 and #2 might make sense, depending on circumstances.
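One common way to handle that flow control, following the pattern used in Grafika's ExtractMpegFrames example, is to block the thread that releases decoder output buffers until the GLES thread has latched the previous frame with updateTexImage(). A sketch, with the field names taken as illustrative:

```java
import android.graphics.SurfaceTexture;

// Shared between the feeding thread and the GLES/SurfaceTexture owner.
private final Object mFrameSyncObject = new Object();
private boolean mFrameAvailable;

@Override  // SurfaceTexture.OnFrameAvailableListener
public void onFrameAvailable(SurfaceTexture st) {
    synchronized (mFrameSyncObject) {
        mFrameAvailable = true;
        mFrameSyncObject.notifyAll();
    }
}

// Called after releaseOutputBuffer(index, true); waits for the new frame
// instead of racing ahead and causing SurfaceTexture to drop it.
public void awaitNewImage(SurfaceTexture st) throws InterruptedException {
    synchronized (mFrameSyncObject) {
        while (!mFrameAvailable) {
            mFrameSyncObject.wait(2500);
            if (!mFrameAvailable) {
                throw new RuntimeException("frame wait timed out");
            }
        }
        mFrameAvailable = false;
    }
    st.updateTexImage();  // latch the new frame into the GLES texture
}
```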

