
Capture Still Image with AVFoundation and Convert To UIImage

I can't figure out how to put these two pieces together to accomplish both tasks. The first block of code captures an image, but it is only an image buffer and cannot be converted to a UIImage.

- (void) captureStillImage
{
    AVCaptureConnection *stillImageConnection = [[self stillImageOutput] connectionWithMediaType:AVMediaTypeVideo];

    [[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:stillImageConnection
                                                         completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        if (imageDataSampleBuffer != NULL) {
            // The still image output delivers JPEG data by default, so the
            // sample buffer can be converted to NSData and then to UIImage.
            NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];

            // Note: captureImage is created here but never stored or passed
            // on; assign it to a property or hand it to the delegate to use it.
            UIImage *captureImage = [[UIImage alloc] initWithData:imageData];
        }

        if ([[self delegate] respondsToSelector:@selector(captureManagerStillImageCaptured:)]) {
            [[self delegate] captureManagerStillImageCaptured:self];
        }
    }];
}

這是從蘋果公司獲取圖像緩沖區並將其轉換為UIImage的示例。 如何結合這兩種方法一起工作?

-(UIImage*) getUIImageFromBuffer:(CMSampleBufferRef) imageSampleBuffer{

    // Get a CMSampleBuffer's Core Video image buffer for the media data.
    // Note: this method assumes the buffer's pixels are in BGRA format
    // (kCVPixelFormatType_32BGRA), as in Apple's video-frame sample code.
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);

    if (imageBuffer == NULL) {
        NSLog(@"No buffer");
        return nil; // nothing to convert; avoid passing NULL to Core Video below
    }

    // Lock the base address of the pixel buffer
    if ((CVPixelBufferLockBaseAddress(imageBuffer, 0)) == kCVReturnSuccess) {
        NSLog(@"Buffer locked successfully");
    }

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    NSLog(@"bytes per row %zu",bytesPerRow );
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    NSLog(@"width %zu",width);

    size_t height = CVPixelBufferGetHeight(imageBuffer);
    NSLog(@"height %zu",height);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image= [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer,0);


    return (image );

}

The first block of code does exactly what you need, and it is the accepted way to do it. What are you trying to do with the second block?
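
If you really do want to wire the two methods together, here is a minimal sketch. It assumes `getUIImageFromBuffer:` lives on the same class as `captureStillImage`, and that the still image output has been configured to deliver uncompressed BGRA frames rather than the default JPEG data (otherwise `CMSampleBufferGetImageBuffer` may return NULL for the captured sample buffer):

```objc
// Assumption: configure the output for BGRA so the conversion method's
// CGBitmapContextCreate call (which expects 32-bit BGRA) matches the data.
[[self stillImageOutput] setOutputSettings:
    @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) }];

[[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:stillImageConnection
                                                     completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    if (imageDataSampleBuffer != NULL) {
        // Reuse the second method to turn the pixel buffer into a UIImage.
        UIImage *captureImage = [self getUIImageFromBuffer:imageDataSampleBuffer];
        // ... store captureImage or pass it to the delegate here ...
    }
}];
```

That said, if JPEG output is acceptable, the `jpegStillImageNSDataRepresentation:` path in the first block is simpler; `getUIImageFromBuffer:` is mainly useful for raw video frames.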

