
Android video real time filtering

I am writing a video player where I try to apply a filter to each decoded frame before showing it on the screen.

I use MediaCodec to extract the frames. Each frame is decoded to a Surface created from a SurfaceTexture, rendered (off-screen) into a pbuffer, and extracted with glReadPixels().
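For reference, the readback at the end of that pipeline is roughly the following sketch (the FrameReader class and method names are mine; it assumes the off-screen EGL pbuffer surface is current and the frame has already been rendered into it, which is what ExtractMpegFramesTest sets up):

```java
import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class FrameReader {
    /**
     * Reads the current (off-screen) framebuffer back into a direct
     * ByteBuffer as RGBA8888. Assumes the EGL pbuffer surface is current
     * and the decoded frame has already been rendered into it.
     */
    public static ByteBuffer readFrame(int width, int height) {
        ByteBuffer pixelBuf = ByteBuffer.allocateDirect(width * height * 4);
        pixelBuf.order(ByteOrder.LITTLE_ENDIAN);
        GLES20.glReadPixels(0, 0, width, height,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuf);
        pixelBuf.rewind();
        return pixelBuf;
    }
}
```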

I have used ExtractMpegFramesTest from this page as an example:

http://bigflake.com/mediacodec/

At this point I have a ByteBuffer with the extracted pixels, on which I do some post-processing (converting to grayscale, running edge detection, etc.).
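For concreteness, here is a minimal sketch of the grayscale case, operating in place on an RGBA8888 buffer like the one glReadPixels() fills (class and method names are hypothetical):

```java
import java.nio.ByteBuffer;

public class GrayscaleFilter {
    /**
     * In-place luminance conversion of an RGBA8888 buffer, as one example
     * of CPU-side post-processing. Uses integer-approximated Rec.601 luma
     * weights (~0.299 R + 0.587 G + 0.114 B).
     */
    public static void toGrayscale(ByteBuffer rgba, int width, int height) {
        for (int i = 0; i < width * height * 4; i += 4) {
            int r = rgba.get(i) & 0xFF;
            int g = rgba.get(i + 1) & 0xFF;
            int b = rgba.get(i + 2) & 0xFF;
            byte y = (byte) ((r * 77 + g * 150 + b * 29) >> 8);
            rgba.put(i, y);
            rgba.put(i + 1, y);
            rgba.put(i + 2, y);
            // alpha byte at i + 3 is left untouched
        }
    }
}
```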

Having done that, I want to render the filtered frame on the screen. I could encode it again with MediaCodec and use a VideoView to render it, but that way each frame is unnecessarily encoded and decoded.

Is there an efficient way to render these frames on the screen?

The simple answer is: upload the pixels to a GLES texture, using glTexImage2D(), and render a quad.
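A minimal sketch of that upload, assuming a current GL context and RGBA8888 pixel data (the helper class is my naming):

```java
import android.opengl.GLES20;
import java.nio.ByteBuffer;

public class TextureUploader {
    /** Creates a GLES texture; call once with a current GL context. */
    public static int createTexture() {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        return tex[0];
    }

    /**
     * Uploads the filtered RGBA pixels into the texture. After this,
     * draw a full-screen quad that samples the texture.
     */
    public static void upload(int texId, ByteBuffer rgba, int width, int height) {
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texId);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
                width, height, 0,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, rgba);
    }
}
```

For per-frame updates you can allocate the texture storage once and use glTexSubImage2D() on subsequent frames, which avoids reallocating it every frame.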

Depending on your filtering, you may also want to consider performing the operations entirely in GLES. This is significantly faster, but a bit harder to pull off because the filters must be written in the fragment shader (GLSL).
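As an illustration, here is the grayscale filter from above expressed as a GLSL fragment shader, held as Java string constants the way Android GL code typically embeds shaders (naming is mine):

```java
public class ShaderFilters {
    // Vertex shader: pass-through position plus texture coordinate.
    public static final String VERTEX_SHADER =
            "attribute vec4 aPosition;\n" +
            "attribute vec2 aTexCoord;\n" +
            "varying vec2 vTexCoord;\n" +
            "void main() {\n" +
            "    gl_Position = aPosition;\n" +
            "    vTexCoord = aTexCoord;\n" +
            "}\n";

    // Fragment shader: the same grayscale filter as the CPU version,
    // but run per-pixel by the GPU while drawing the quad.
    public static final String GRAYSCALE_FRAGMENT_SHADER =
            "precision mediump float;\n" +
            "uniform sampler2D sTexture;\n" +
            "varying vec2 vTexCoord;\n" +
            "void main() {\n" +
            "    vec4 c = texture2D(sTexture, vTexCoord);\n" +
            "    float y = dot(c.rgb, vec3(0.299, 0.587, 0.114));\n" +
            "    gl_FragColor = vec4(y, y, y, c.a);\n" +
            "}\n";
}
```

Note that if you sample the decoder's SurfaceTexture output directly (a GL_TEXTURE_EXTERNAL_OES texture) rather than a texture you uploaded yourself, the fragment shader must declare the sampler as samplerExternalOES and require the GL_OES_EGL_image_external extension.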

You can find an example of shader-based image filtering in Grafika (demo video here), along with some uses of glTexImage2D() to send bitmap data to a texture.
