
Alternatives to creating an OpenGL texture from a captured video frame to overlay an OpenGL view over video? (iPhone)

This is mostly relevant for augmented reality type applications. Apple provides information on how to capture video frames (and save them as images if need be) with AVCaptureSession here:

http://developer.apple.com/library/ios/#qa/qa2010/qa1702.html
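As a rough illustration of the capture pipeline that Apple Q&A describes, the session feeds each frame to a delegate as a CMSampleBuffer via an AVCaptureVideoDataOutput. This is a minimal sketch using the modern Swift API; `frameDelegate` (an object conforming to AVCaptureVideoDataOutputSampleBufferDelegate) and `captureQueue` are illustrative names, not from the Q&A:

```swift
import AVFoundation

// Sketch: a capture session whose video data output delivers frames
// to a delegate. frameDelegate and captureQueue are assumed to exist.
let session = AVCaptureSession()
session.sessionPreset = .medium

if let camera = AVCaptureDevice.default(for: .video),
   let input = try? AVCaptureDeviceInput(device: camera),
   session.canAddInput(input) {
    session.addInput(input)
}

let output = AVCaptureVideoDataOutput()
// BGRA is the most convenient pixel format to later upload to a GL texture.
output.videoSettings =
    [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
output.setSampleBufferDelegate(frameDelegate, queue: captureQueue)
if session.canAddOutput(output) {
    session.addOutput(output)
}
session.startRunning()
```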

I know that it is possible to create an OpenGL texture out of a captured video frame and then use that as a background in the OpenGL view over which to overlay other graphics.
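That frame-to-texture step can be sketched roughly as below: inside the capture delegate callback, the frame's pixel buffer is uploaded into a texture that is then drawn as a full-screen quad. This assumes the output was configured for BGRA frames and relies on Apple's BGRA texture extension; `cameraTexture` (a GLuint created and configured earlier) is an assumed name:

```swift
import AVFoundation
import OpenGLES

// Sketch: upload each captured BGRA frame into an OpenGL ES texture.
func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    let width  = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)

    glBindTexture(GLenum(GL_TEXTURE_2D), cameraTexture)
    // GL_BGRA matches kCVPixelFormatType_32BGRA, the format requested
    // on the video data output (via the GL_APPLE_texture_format_BGRA8888 extension).
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA,
                 GLsizei(width), GLsizei(height), 0,
                 GLenum(GL_BGRA), GLenum(GL_UNSIGNED_BYTE),
                 CVPixelBufferGetBaseAddress(pixelBuffer))

    CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
    // ...then draw the textured quad, and the overlay graphics on top of it.
}
```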

I am wondering if there are any alternatives to this method? The method mentioned above may be the best (I don't know whether it is), but if there are alternatives worth trying, it would be good to know about them. For example, is there a way to overlay the OpenGL view directly over the AVCaptureVideoPreviewLayer?
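For concreteness, the direct-overlay alternative being asked about would look something like this: the preview layer at the back of the view hierarchy, with a non-opaque GL-backed view composited on top. `containerView`, `session`, and `glView` are illustrative names; this is a sketch of the layering, not a recommendation:

```swift
import AVFoundation
import UIKit

// Sketch: AVCaptureVideoPreviewLayer behind a transparent GL-backed view.
let previewLayer = AVCaptureVideoPreviewLayer(session: session)
previewLayer.frame = containerView.bounds
previewLayer.videoGravity = .resizeAspectFill
containerView.layer.addSublayer(previewLayer)

glView.isOpaque = false            // non-opaque, so the video shows through
glView.backgroundColor = .clear
containerView.addSubview(glView)   // composited above the preview layer
```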

You can indeed layer OpenGL content over something like an AVCaptureVideoPreviewLayer, but your performance will suffer. Apple highly recommends that you not overlay non-opaque OpenGL ES content on top of other display elements. From the OpenGL ES Programming Guide for iOS:

For the absolute best performance, your application should rely solely on OpenGL ES to render your content. To do this, size the view that holds your CAEAGLLayer object to match the screen, set its opaque property to YES, and ensure that no other Core Animation layers or views are visible. If your OpenGL ES layer is composited on top of other layers, making your CAEAGLLayer object opaque reduces but doesn't eliminate the performance cost.
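Apple's guidance above amounts to a few lines of view configuration. A minimal sketch, assuming `glView` is your GL-backed (CAEAGLLayer-hosting) view:

```swift
import UIKit

// Sketch: configure the GL-backed view per Apple's guidance —
// full screen, opaque, with nothing composited over it.
glView.frame = UIScreen.main.bounds  // size the view to match the screen
glView.isOpaque = true               // opaque = YES avoids blending with layers below
glView.layer.isOpaque = true
```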

If your CAEAGLLayer object is blended on top of layers underneath it in the layer hierarchy, the renderbuffer's color data must be in a premultiplied alpha format to be composited correctly by Core Animation. Blending OpenGL ES content on top of other content has a severe performance penalty.

Honestly, it really isn't that hard to pull in the video as a texture and then display it as a billboard behind your 3-D overlay. My sample application here does passthrough of camera video to an OpenGL ES (2.0) texture for display on the screen. With only a few modifications, you could place 3-D content on top of that. This will give you far better performance than trying to draw non-opaque 3-D content on top of an AVCaptureVideoPreviewLayer.

However, if you are just wanting to display simple static UIViews over OpenGL ES content, that can be done without much of a performance penalty (~5% reduction in framerate in my experience).
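For example, a static HUD element layered over the GL content is just an ordinary subview. A sketch, with `glView` again standing in for the GL-backed view:

```swift
import UIKit

// Sketch: a simple static UIView (here a UILabel) layered over GL content —
// the kind of overlay the answer says costs only ~5% framerate.
let label = UILabel(frame: CGRect(x: 20, y: 40, width: 200, height: 30))
label.text = "Score: 0"
label.textColor = .white
label.backgroundColor = .clear
glView.addSubview(label)
```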

Sure, views can be layered together, regardless of content. Layering GL over video is no different from layering 2D over 2D.

Just about the only catch is that you need to render your GL content so that the image produced has premultiplied alpha (just like all other transparent content on iOS is premultiplied).
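In practice that usually means choosing the blend function accordingly. A minimal sketch, assuming your fragment shader outputs colors already multiplied by their alpha:

```swift
import OpenGLES

// Sketch: blend settings for premultiplied-alpha output, so Core Animation
// composites the GL layer correctly over the content beneath it.
glEnable(GLenum(GL_BLEND))
// GL_ONE (not GL_SRC_ALPHA) as the source factor, because the fragment
// colors are assumed to already be premultiplied by alpha.
glBlendFunc(GLenum(GL_ONE), GLenum(GL_ONE_MINUS_SRC_ALPHA))
```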
