iOS GLKit: set GLKBaseEffect texture with camera output

I have set up an AVCapture session, and in the delegate method I use the following code to try to set a GLKBaseEffect texture with the output, but all I get is black. What should I do to make it work?

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);

    effect.texture2d0.name = CVOpenGLESTextureGetName(pixelBuffer);
}

TLDR: I don't think you can use camera output with GLKBaseEffect. Data from the camera is in biplanar YUV format, so you need a custom fragment shader to convert that to RGB at render time, and GLKBaseEffect doesn't do custom shaders.

It looks like there's some API confusion here, so read on if you'd like more background...

Though the CVOpenGLESTexture type is derived from CVPixelBuffer, the image buffers you get back from CMSampleBufferGetImageBuffer aren't automatically OpenGL ES textures. You need to create textures from those image buffers each time you get a frame from the camera, using a CVOpenGLESTextureCache.
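
For example, creating the cache is a one-time step right after you set up your EAGLContext. This is just a minimal sketch: _context and _videoTextureCache are placeholder names, and error handling is kept to a minimum.

#import <CoreVideo/CVOpenGLESTextureCache.h>

CVOpenGLESTextureCacheRef _videoTextureCache;

// Create the texture cache once, tied to the EAGLContext you render with.
// (On older SDKs the context parameter is typed void * and needs a __bridge cast.)
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, _context, NULL, &_videoTextureCache);
if (err != kCVReturnSuccess) {
    NSLog(@"CVOpenGLESTextureCacheCreate failed: %d", err);
}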

Apple's GLCameraRipple sample code illustrates how to do this. Here's a quick overview:

  1. After setting up a GL context, the setupAVCapture method calls CVOpenGLESTextureCacheCreate to create a texture cache tied to that context.
  2. In captureOutput:didOutputSampleBuffer:fromConnection:, they get the image buffer from the sample buffer using CMSampleBufferGetImageBuffer, then create two OpenGL ES textures from it using the texture cache's CVOpenGLESTextureCacheCreateTextureFromImage function. You need two textures because the image buffer has YUV color data in two bitplanes (see the code sketch after this list).
  3. Each texture gets bound for rendering right after being created, using glBindTexture. Generally, you could replace a glBindTexture call with setting a GLKBaseEffect's texture2d0.name and telling it to prepareToDraw, but GLKBaseEffect can't render YUV.
  4. Draw with the two bound textures, using a shader program whose fragment shader first makes a YUV vector by combining components from the Y and UV textures, then does a simple matrix multiply to convert it to an RGB pixel color according to the HDTV color space spec.
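
Putting steps 2 and 3 together, the per-frame work in the delegate callback looks roughly like this. It's a sketch modeled on GLCameraRipple, not a drop-in implementation: it assumes the video data output is configured for kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, that _videoTextureCache is the cache created earlier, that <OpenGLES/ES2/glext.h> is imported for GL_RED_EXT / GL_RG_EXT, and error handling is omitted.

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    size_t width  = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);

    CVOpenGLESTextureRef lumaTexture = NULL;
    CVOpenGLESTextureRef chromaTexture = NULL;

    // Plane 0: full-size luminance (Y), one byte per pixel.
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _videoTextureCache, pixelBuffer, NULL,
                                                 GL_TEXTURE_2D, GL_RED_EXT, (GLsizei)width, (GLsizei)height,
                                                 GL_RED_EXT, GL_UNSIGNED_BYTE, 0, &lumaTexture);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(CVOpenGLESTextureGetTarget(lumaTexture), CVOpenGLESTextureGetName(lumaTexture));
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // Plane 1: half-size interleaved chrominance (CbCr), two bytes per pixel.
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _videoTextureCache, pixelBuffer, NULL,
                                                 GL_TEXTURE_2D, GL_RG_EXT, (GLsizei)width / 2, (GLsizei)height / 2,
                                                 GL_RG_EXT, GL_UNSIGNED_BYTE, 1, &chromaTexture);
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(CVOpenGLESTextureGetTarget(chromaTexture), CVOpenGLESTextureGetName(chromaTexture));
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // ... draw with the custom YUV-to-RGB shader here ...

    // Release this frame's textures and flush the cache before the next frame.
    CFRelease(lumaTexture);
    CFRelease(chromaTexture);
    CVOpenGLESTextureCacheFlush(_videoTextureCache, 0);
}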

Because you need a custom fragment shader for the YUV to RGB conversion, GLKBaseEffect won't help you. However, as the GLCameraRipple sample shows, the custom shaders you'd need to write aren't that scary.
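
For a sense of scale, the sample's fragment shader is essentially just the following GLSL: it samples the two textures bound above, re-centers the chroma, and applies a BT.709 conversion matrix. SamplerY and SamplerUV are the shader's names for texture units 0 and 1, and the values assume the full-range biplanar pixel format mentioned earlier.

varying lowp vec2 texCoordVarying;

uniform sampler2D SamplerY;   // texture unit 0: luminance plane
uniform sampler2D SamplerUV;  // texture unit 1: chrominance plane

void main()
{
    mediump vec3 yuv;
    lowp vec3 rgb;

    // Rebuild a YUV vector from the two planes, re-centering chroma around zero.
    yuv.x  = texture2D(SamplerY, texCoordVarying).r;
    yuv.yz = texture2D(SamplerUV, texCoordVarying).rg - vec2(0.5, 0.5);

    // BT.709 (HDTV) YUV-to-RGB conversion matrix (column-major).
    rgb = mat3(1.0,     1.0,      1.0,
               0.0,     -0.18732, 1.8556,
               1.57481, -0.46813, 0.0) * yuv;

    gl_FragColor = vec4(rgb, 1.0);
}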
