
ffmpeg: how to get YUV data from AVframe and draw it using opengl?

How can I access the YUV data from an AVFrame struct — by accessing its data[] arrays? And is there a simple way to draw YUV data with OpenGL, rather than writing a shader and rendering the Y, U, and V planes myself?

Yes, you use frame->data[]. Typically everyone uses shaders and textures, so you should too. Upload each plane as a texture; code like this uploads the Y plane:

glPixelStorei(GL_UNPACK_ROW_LENGTH, frame->linesize[0]);
glActiveTexture(GL_TEXTURE0 + id);
glBindTexture(GL_TEXTURE_2D, texture[id]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, frame->width, frame->height,
             0, GL_LUMINANCE, GL_UNSIGNED_BYTE, frame->data[0]);

Upload data[1] and data[2] in the same way, but apply the chroma subsampling to the width/height (e.g. for 4:2:0, uvwidth = (width + 1) >> 1), and set GL_UNPACK_ROW_LENGTH to each plane's own linesize. This code also assumes you've created three textures in an array called texture[]. In your drawing loop, you can then refer to these texture ids to bind them to sampler uniforms in your shaders, which you eventually use to draw. Shader examples that do YUV-to-RGB conversion can be found virtually anywhere.
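For completeness, a minimal fragment shader of the kind referred to here might look like the sketch below. It assumes three single-channel samplers (the names texY, texU, texV are placeholders) and full-range BT.601 coefficients; adjust the matrix for limited-range or BT.709 content:

```glsl
uniform sampler2D texY;
uniform sampler2D texU;
uniform sampler2D texV;
varying vec2 vTexCoord;

void main() {
    float y = texture2D(texY, vTexCoord).r;
    float u = texture2D(texU, vTexCoord).r - 0.5;  // center chroma around 0
    float v = texture2D(texV, vTexCoord).r - 0.5;
    // Full-range BT.601 YUV -> RGB conversion
    float r = y + 1.402 * v;
    float g = y - 0.344136 * u - 0.714136 * v;
    float b = y + 1.772 * u;
    gl_FragColor = vec4(r, g, b, 1.0);
}
```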
