
ffmpeg: how to get YUV data from AVframe and draw it using opengl?

How can I access the YUV data in FFmpeg's AVFrame struct? By accessing its data[] array? And is there a simple way to draw YUV data using OpenGL without writing a shader and drawing the Y, U, V planes individually?

Yes, you use frame->data[]. Practically everyone uses shaders and textures, so you should too. Upload each plane as a texture, using code like this for the Y plane:

// linesize[0] may be larger than width (rows can be padded), so tell GL the stride
glPixelStorei(GL_UNPACK_ROW_LENGTH, frame->linesize[0]);
glActiveTexture(GL_TEXTURE0 + id);
glBindTexture(GL_TEXTURE_2D, texture[id]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, frame->width, frame->height,
             0, GL_LUMINANCE, GL_UNSIGNED_BYTE, frame->data[0]);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);  // reset the stride for later uploads

Upload data[1] and data[2] the same way, but apply the chroma subsampling to the width and height (e.g. for 4:2:0, uvwidth = (width + 1) >> 1, and likewise for the height), and use linesize[1] and linesize[2] as the row strides. This code also assumes you've created three textures in an array called texture[]. In your drawing loop, you can then bind these texture IDs to sampler uniforms in your shaders, which do the actual YUV-to-RGB conversion and drawing. Shader examples that do YUV-to-RGB conversion can be found virtually anywhere.

