
What is the most efficient way to display CVImageBufferRef on iOS

I have CMSampleBufferRef(s) which I decode using VTDecompressionSessionDecodeFrame, which results in a CVImageBufferRef once decoding of a frame has completed, so my question is:

What would be the most efficient way to display these CVImageBufferRefs in a UIView?

I have succeeded in converting CVImageBufferRef to CGImageRef and displaying those by setting the CGImageRef as a CALayer's contents, but that only works when the DecompressionSession has been configured with @{ (id)kCVPixelBufferPixelFormatTypeKey: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] };
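
For context, here is a minimal sketch of what that session setup looks like; formatDescription (a CMVideoFormatDescriptionRef built from the stream) and callbackRecord (a VTDecompressionOutputCallbackRecord) are assumed to exist already and are not from the original post:

    #import <VideoToolbox/VideoToolbox.h>

    NSDictionary *destinationAttributes = @{
        (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
    };
    VTDecompressionSessionRef session = NULL;
    OSStatus status = VTDecompressionSessionCreate(kCFAllocatorDefault,
                                                   formatDescription,   // CMVideoFormatDescriptionRef of the stream (assumed)
                                                   NULL,                // let VideoToolbox pick the decoder
                                                   (__bridge CFDictionaryRef)destinationAttributes,
                                                   &callbackRecord,     // VTDecompressionOutputCallbackRecord (assumed)
                                                   &session);
    // if status == noErr, every decoded frame handed to the callback
    // will be a 32BGRA CVImageBufferRef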

Here is example code showing how I've converted a CVImageBufferRef to a CGImageRef (note: the CVPixelBuffer data has to be in 32BGRA format for this to work):

    CVPixelBufferLockBaseAddress(cvImageBuffer, 0);
    // get image properties
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(cvImageBuffer);
    size_t bytesPerRow   = CVPixelBufferGetBytesPerRow(cvImageBuffer);
    size_t width         = CVPixelBufferGetWidth(cvImageBuffer);
    size_t height        = CVPixelBufferGetHeight(cvImageBuffer);

    /* create a CGImageRef from the CVImageBufferRef (BGRA = little-endian, alpha first) */
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef    cgContext  = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(cgContext);

    // release context and colorspace, and unlock the pixel buffer again
    CGContextRelease(cgContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(cvImageBuffer, 0);

    // now the CGImageRef can be displayed either by setting it as a CALayer's contents
    // or by creating a [UIImage imageWithCGImage:cgImage] that can be shown in a
    // UIImageView ... (call CGImageRelease(cgImage) once it is no longer needed)
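
As a usage sketch, assuming a videoLayer property that holds a plain CALayer sized to the view (an assumption, not part of the original code), the resulting image can then be handed to Core Animation on the main thread:

    dispatch_async(dispatch_get_main_queue(), ^{
        // 'videoLayer' is an assumed CALayer property that fills the view
        self.videoLayer.contents = (__bridge id)cgImage;
        CGImageRelease(cgImage);   // contents retains the image, so balance the create above
    });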

The #WWDC14 session 513 ( https://developer.apple.com/videos/wwdc/2014/#513 ) hints that the YUV -> RGB colorspace conversion (done on the CPU?) can be avoided if some YUV-capable OpenGL ES magic is used - I wonder what that might be and how it could be accomplished?

Apple's iOS sample code GLCameraRipple shows an example of displaying a YUV CVPixelBufferRef captured from the camera, using OpenGL ES with two separate textures for the Y and UV planes and a fragment shader program that does the YUV to RGB colorspace conversion on the GPU - is all that really required, or is there some more straightforward way this can be done?
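
For reference, the core of that approach is CVOpenGLESTextureCache, which wraps the pixel buffer's planes as GL textures without a CPU copy. A rough sketch, assuming the decompression session outputs kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange buffers and an EAGLContext named _context already exists (both assumptions):

    #import <CoreVideo/CoreVideo.h>
    #import <OpenGLES/ES2/glext.h>

    // create once and reuse for every frame
    CVOpenGLESTextureCacheRef _textureCache;
    CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, _context, NULL, &_textureCache);

    // per decoded frame: wrap the Y and CbCr planes directly as GL textures (no CPU conversion)
    CVOpenGLESTextureRef yTexture = NULL, uvTexture = NULL;
    size_t width  = CVPixelBufferGetWidth(cvImageBuffer);
    size_t height = CVPixelBufferGetHeight(cvImageBuffer);

    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _textureCache,
        cvImageBuffer, NULL, GL_TEXTURE_2D,
        GL_RED_EXT, (GLsizei)width, (GLsizei)height,
        GL_RED_EXT, GL_UNSIGNED_BYTE, 0, &yTexture);        // plane 0 = luma

    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _textureCache,
        cvImageBuffer, NULL, GL_TEXTURE_2D,
        GL_RG_EXT, (GLsizei)(width / 2), (GLsizei)(height / 2),
        GL_RG_EXT, GL_UNSIGNED_BYTE, 1, &uvTexture);        // plane 1 = chroma

    // bind both textures, draw a full-screen quad, and let the fragment shader
    // do the YUV -> RGB matrix multiply on the GPU
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(CVOpenGLESTextureGetTarget(yTexture), CVOpenGLESTextureGetName(yTexture));
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(CVOpenGLESTextureGetTarget(uvTexture), CVOpenGLESTextureGetName(uvTexture));
    // ... render, then release the per-frame textures
    CFRelease(yTexture);
    CFRelease(uvTexture);
    CVOpenGLESTextureCacheFlush(_textureCache, 0);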

NOTE: In my use case I'm unable to use AVSampleBufferDisplayLayer, due to the way the input to the decompression becomes available.

If you're getting your CVImageBufferRef from a CMSampleBufferRef which you're receiving from captureOutput:didOutputSampleBuffer:fromConnection:, you don't need to make that conversion and can directly get the image data out of the CMSampleBufferRef. Here's the code:

    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:sampleBuffer];
    UIImage *frameImage = [UIImage imageWithData:imageData];

The API description doesn't provide any info about whether 32BGRA is supported or not, and it produces imageData, along with any metadata, in JPEG format without you having to apply any compression yourself. If your goal is to display the image on screen or use it with a UIImageView, this is the quick way.

Update: The original answer below does not work because kCVPixelBufferIOSurfaceCoreAnimationCompatibilityKey is unavailable for iOS.


UIView is backed by a CALayer whose contents property supports multiple types of images. As detailed in my answer to a similar question for macOS, it is possible to use CALayer to render a CVPixelBuffer 's backing IOSurface . (Caveat: I have only tested this on macOS.)
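
A rough sketch of what that macOS-only approach looks like (per the update above, the required pixel buffer attribute is unavailable on iOS); layer is an assumed CALayer, and the decompression session is assumed to have been asked for IOSurface- and Core Animation-compatible buffers:

    // requested in the destination image buffer attributes when creating
    // the VTDecompressionSession (macOS only)
    NSDictionary *destinationAttributes = @{
        (id)kCVPixelBufferIOSurfaceCoreAnimationCompatibilityKey : @YES
    };

    // ... then, for every decoded frame, hand the backing IOSurface to the layer
    layer.contents = (__bridge id)CVPixelBufferGetIOSurface(cvImageBuffer);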
