
What is the most efficient way to display CVImageBufferRef on iOS

I have CMSampleBufferRef(s) which I decode using VTDecompressionSessionDecodeFrame, which results in a CVImageBufferRef after decoding of a frame has completed, so my question is:

What would be the most efficient way to display these CVImageBufferRefs in a UIView?

I have succeeded in converting the CVImageBufferRef to a CGImageRef and displaying it by setting the CGImageRef as a CALayer's contents, but for this to work the DecompressionSession has to be configured with @{ (id)kCVPixelBufferPixelFormatTypeKey: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] };

Here is example code showing how I've converted the CVImageBufferRef to a CGImageRef (note: the CVPixelBuffer data has to be in the 32BGRA format for this to work):

    CVPixelBufferLockBaseAddress(cvImageBuffer, 0);

    // get image properties
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(cvImageBuffer);
    size_t bytesPerRow   = CVPixelBufferGetBytesPerRow(cvImageBuffer);
    size_t width         = CVPixelBufferGetWidth(cvImageBuffer);
    size_t height        = CVPixelBufferGetHeight(cvImageBuffer);

    // create a CGImageRef from the CVImageBufferRef (assumes 32BGRA pixel data)
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef    cgContext  = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(cgContext);

    // release the context and colorspace, and balance the earlier lock
    CGContextRelease(cgContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(cvImageBuffer, 0);

    // now the CGImageRef can be displayed either by setting it as a CALayer's
    // contents or by creating a [UIImage imageWithCGImage:cgImage] that can be
    // displayed in a UIImageView ...
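
For completeness, a minimal sketch of the two display options mentioned in the comment above (assuming this runs on the main thread; self.view and self.imageView are illustrative names):

    // Option 1: set the CGImageRef directly as the layer's contents.
    self.view.layer.contents = (__bridge id)cgImage;

    // Option 2: wrap it in a UIImage and hand it to a UIImageView.
    self.imageView.image = [UIImage imageWithCGImage:cgImage];

    // Both the layer and the UIImage retain the CGImage, so drop our reference.
    CGImageRelease(cgImage);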

The #WWDC14 session 513 ( https://developer.apple.com/videos/wwdc/2014/#513 ) hints that the YUV -> RGB colorspace conversion (done on the CPU?) can be avoided if YUV-capable GLES magic is used. I wonder what that magic might be and how it could be accomplished?

Apple's iOS sample code GLCameraRipple shows an example of displaying a YUV CVPixelBufferRef captured from the camera, using two OpenGL ES textures for the separate Y and UV planes and a fragment shader program that does the YUV to RGB colorspace conversion calculations on the GPU. Is all of that really required, or is there some more straightforward way to do it?
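
For reference, here is a rough sketch of the per-plane texture upload that GLCameraRipple performs (assuming an existing EAGLContext named context and an NV12-format pixelBuffer coming out of the decompression session; error handling and texture cleanup are omitted):

    #import <CoreVideo/CoreVideo.h>
    #import <OpenGLES/ES2/glext.h>

    // Create the texture cache once, tied to the GL context.
    CVOpenGLESTextureCacheRef textureCache;
    CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, context, NULL, &textureCache);

    size_t width  = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);

    // Plane 0: luminance (Y) as a one-channel texture.
    CVOpenGLESTextureRef yTexture;
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
        pixelBuffer, NULL, GL_TEXTURE_2D, GL_RED_EXT,
        (GLsizei)width, (GLsizei)height,
        GL_RED_EXT, GL_UNSIGNED_BYTE, 0, &yTexture);

    // Plane 1: interleaved chrominance (CbCr) as a two-channel texture at half resolution.
    CVOpenGLESTextureRef uvTexture;
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
        pixelBuffer, NULL, GL_TEXTURE_2D, GL_RG_EXT,
        (GLsizei)(width / 2), (GLsizei)(height / 2),
        GL_RG_EXT, GL_UNSIGNED_BYTE, 1, &uvTexture);

    // Bind both textures; the fragment shader samples them and does the
    // YUV -> RGB matrix multiply on the GPU.
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(CVOpenGLESTextureGetTarget(yTexture), CVOpenGLESTextureGetName(yTexture));
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(CVOpenGLESTextureGetTarget(uvTexture), CVOpenGLESTextureGetName(uvTexture));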

NOTE: In my use case I'm unable to use AVSampleBufferDisplayLayer, due to the way the input to decompression becomes available.

If you're getting your CVImageBufferRef from a CMSampleBufferRef that you're receiving from captureOutput:didOutputSampleBuffer:fromConnection:, you don't need to make that conversion and can directly get the imageData out of the CMSampleBufferRef. Here's the code:

    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:sampleBuffer];
    UIImage *frameImage = [UIImage imageWithData:imageData];

The API description doesn't provide any info about whether 32BGRA is supported or not; it produces the imageData, along with any metadata, in JPEG format without applying any additional compression. If your goal is to display the image on screen or to use it with a UIImageView, this is the quick way.
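
Note that jpegStillImageNSDataRepresentation: is documented for the JPEG-encoded sample buffers that AVCaptureStillImageOutput produces. A minimal sketch of that capture path, with illustrative variable names:

    AVCaptureConnection *connection =
        [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
    [stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
                                                   completionHandler:^(CMSampleBufferRef sampleBuffer, NSError *error) {
        if (sampleBuffer != NULL) {
            NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:sampleBuffer];
            UIImage *frameImage = [UIImage imageWithData:imageData];
            // hand frameImage off to the UI on the main queue ...
        }
    }];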

Update: The original answer below does not work because kCVPixelBufferIOSurfaceCoreAnimationCompatibilityKey is unavailable on iOS.


UIView is backed by a CALayer whose contents property supports multiple types of images. As detailed in my answer to a similar question for macOS, it is possible to use CALayer to render a CVPixelBuffer's backing IOSurface. (Caveat: I have only tested this on macOS.)
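
A minimal sketch of that macOS approach, assuming the decompression session was created with pixel buffer attributes that make the buffers IOSurface-backed (variable names are illustrative):

    // Requested when creating the decompression session, e.g.:
    //   @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} }
    IOSurfaceRef surface = CVPixelBufferGetIOSurface(pixelBuffer);
    if (surface != NULL) {
        // On macOS, a CALayer's contents can be an IOSurfaceRef directly.
        layer.contents = (__bridge id)surface;
    }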
