
Capture Still Image with AVFoundation and Convert To UIImage

I have the pieces in place to accomplish both of these tasks; I'm just not sure how to put them together. The first block of code captures an image, but it only gives me an image buffer, not something I can convert to a UIImage.

- (void) captureStillImage
{
    AVCaptureConnection *stillImageConnection = [[self stillImageOutput] connectionWithMediaType:AVMediaTypeVideo];

    [[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:stillImageConnection
                                                         completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {

                                                             if (imageDataSampleBuffer != NULL) {
                                                                 NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];

                                                                 UIImage *captureImage = [[UIImage alloc] initWithData:imageData];


                                                             }

                                                             if ([[self delegate] respondsToSelector:@selector(captureManagerStillImageCaptured:)]) {
                                                                 [[self delegate] captureManagerStillImageCaptured:self];
                                                             }
                                                         }];
}

Here is code from an Apple example that takes an image buffer and converts it to a UIImage. How do I combine these two methods so they work together?

-(UIImage*) getUIImageFromBuffer:(CMSampleBufferRef) imageSampleBuffer{

    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);

    if (imageBuffer == NULL) {
        NSLog(@"No buffer");
        return nil;
    }

    // Lock the base address of the pixel buffer
    if((CVPixelBufferLockBaseAddress(imageBuffer, 0))==kCVReturnSuccess){
        NSLog(@"Buffer locked successfully");
    }

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    NSLog(@"bytes per row %zu",bytesPerRow );
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    NSLog(@"width %zu",width);

    size_t height = CVPixelBufferGetHeight(imageBuffer);
    NSLog(@"height %zu",height);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image= [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer,0);


    return image;

}
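Note that getUIImageFromBuffer: only produces a valid image when the sample buffer carries an uncompressed BGRA pixel buffer, which is what its CGBitmapContext expects; if the still image output is configured for JPEG (as jpegStillImageNSDataRepresentation: in the first block requires), CMSampleBufferGetImageBuffer returns NULL. Below is a minimal sketch of one way to wire the two methods together, assuming the still image output is reconfigured for kCVPixelFormatType_32BGRA; this configuration is not part of the original code.

// Sketch only: combining captureStillImage with getUIImageFromBuffer:,
// assuming the output delivers uncompressed 32BGRA pixel buffers.

// 1. When configuring the session, request uncompressed BGRA output
//    (use a __bridge cast for the key under ARC):
NSDictionary *outputSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
                                                            forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[[self stillImageOutput] setOutputSettings:outputSettings];

// 2. In the completion handler, hand the sample buffer to the conversion method:
[[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:stillImageConnection
                                                     completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    if (imageDataSampleBuffer != NULL) {
        UIImage *captureImage = [self getUIImageFromBuffer:imageDataSampleBuffer];
        // ...use captureImage here. Note that jpegStillImageNSDataRepresentation:
        // can no longer be used once the output is uncompressed.
    }
}];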

The first block of code does exactly what you need and is an acceptable way of doing it. What are you trying to do with the second block?
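To make that concrete: the completion handler in the first block already turns the captured sample buffer into a UIImage via its JPEG representation, so no pixel-buffer conversion is needed. Here is a minimal sketch of what the handler could do with that image, assuming a hypothetical delegate method captureManagerStillImageCaptured:image: (the original protocol only declares captureManagerStillImageCaptured:).

- (void) captureStillImage
{
    AVCaptureConnection *stillImageConnection = [[self stillImageOutput] connectionWithMediaType:AVMediaTypeVideo];

    [[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:stillImageConnection
                                                         completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        if (imageDataSampleBuffer != NULL) {
            // The still image output delivers a JPEG-encoded sample buffer,
            // which this call turns into plain JPEG data in memory.
            NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];

            // UIImage can decode that JPEG data directly.
            UIImage *captureImage = [[UIImage alloc] initWithData:imageData];

            // Hypothetical delegate method carrying the image; the original
            // protocol only declares captureManagerStillImageCaptured:.
            if ([[self delegate] respondsToSelector:@selector(captureManagerStillImageCaptured:image:)]) {
                [[self delegate] captureManagerStillImageCaptured:self image:captureImage];
            }
        }
    }];
}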
