
iPhone OpenGLES video texture

I know that Apple offers a sample called GLCameraRipple which uses CVOpenGLESTextureCacheCreateTextureFromImage to achieve this. But when I switched to glTexImage2D, nothing is displayed. What's wrong with my code?

if (format == kCVPixelFormatType_32BGRA) {
    CVPixelBufferRef pixelBuf = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuf, 0);
    void *baseaddress = CVPixelBufferGetBaseAddress(pixelBuf);
    size_t width  = CVPixelBufferGetWidth(pixelBuf);
    size_t height = CVPixelBufferGetHeight(pixelBuf);

    glGenTextures(1, &textureID);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, textureID);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, baseaddress);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    CVPixelBufferUnlockBaseAddress(pixelBuf, 0);
}

Thank you very much for any help!

There are a couple of problems here. First, the GLCameraRipple example was built to take in YUV camera data, not BGRA. Your code above uploads only a single texture of BGRA data, rather than the separate Y and UV planes the application expects. The sample uses a colorspace-conversion shader to merge those planes as a first stage, and that shader needs YUV data to work.
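To see why one BGRA upload can't stand in for the two planes, it helps to look at their dimensions. In the bi-planar 4:2:0 format the sample requests, the Y plane is full resolution (one byte per pixel) while the interleaved CbCr plane is half resolution in both axes (two bytes per sample); each plane is typically uploaded as its own texture (e.g. GL_LUMINANCE and GL_LUMINANCE_ALPHA, or GL_RED_EXT/GL_RG_EXT via the texture cache). A minimal plain-C sketch of the arithmetic — the `PlaneInfo` helpers are hypothetical, not part of any Apple API:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical helpers: per-plane dimensions of a bi-planar 4:2:0 pixel
   buffer, matching what CVPixelBufferGetWidthOfPlane / GetHeightOfPlane
   would report for planes 0 and 1. */
typedef struct { size_t width, height, bytes; } PlaneInfo;

static PlaneInfo y_plane(size_t w, size_t h) {
    /* Plane 0: full-resolution luminance, one byte per pixel. */
    PlaneInfo p = { w, h, w * h };
    return p;
}

static PlaneInfo uv_plane(size_t w, size_t h) {
    /* Plane 1: Cb and Cr subsampled 2x2 and interleaved, two bytes per sample. */
    PlaneInfo p = { w / 2, h / 2, (w / 2) * (h / 2) * 2 };
    return p;
}
```

For a 640x480 frame this gives a 640x480 Y plane (307200 bytes) and a 320x240 UV plane (153600 bytes) — two textures of different sizes and formats, which is why a single 640x480 BGRA upload can't feed the sample's shader.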

Second, you are allocating a new texture for each uploaded frame, which is a really bad idea. This is particularly bad if you don't delete that texture when done, because you will chew up resources this way. You should allocate a texture once for each plane you'll upload, keep it around as you upload each video frame, and delete it only when you're done processing video.

You'll either need to rework the above to upload the separate Y and UV planes, or remove or rewrite the sample's color-processing shader. If you go the BGRA route, you'll also need to make sure the camera is now giving you BGRA frames instead of YUV ones.
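For that last point, the pixel format is a property of the capture output, not the GL code. A configuration sketch, assuming an existing AVCaptureVideoDataOutput named `videoOutput` (the variable name is illustrative):

```objc
// Ask AVFoundation for BGRA frames instead of the sample's YUV format.
videoOutput.videoSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
};
```

With this in place, `format` in your callback will actually be kCVPixelFormatType_32BGRA and your `if` branch will run.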
