
Screen capture with three embedded controllers where one is a UIImagePicker in video mode?

I need to perform a live screen capture of the entire iPhone screen. The screen has three container views embedded. One of these containers is a UIImagePickerController. Everything on the screen captures beautifully, but the one container that has the UIImagePickerController is black. I need the whole screen capture so the continuity of operation looks seamless. Is there a way to capture what is currently shown on the screen from a UIImagePickerController? Below is the code I am using to capture the screen image.

I have also tried Apple's Technical Q&A QA1703.

UIGraphicsBeginImageContextWithOptions(myView.bounds.size, YES, 0.0f);
[myView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
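
For reference, iOS 7 added drawViewHierarchyInRect:afterScreenUpdates: as a newer snapshot API; a minimal sketch follows, though it may well hit the same black-preview limitation for camera content:

UIGraphicsBeginImageContextWithOptions(myView.bounds.size, YES, 0.0f);
// Newer snapshot API (iOS 7+); afterScreenUpdates:NO avoids forcing an extra layout pass.
[myView drawViewHierarchyInRect:myView.bounds afterScreenUpdates:NO];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();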

Thanks in advance for any help!

I've had a similar problem before when trying to capture a screenshot containing both a GLKView and a UIImagePickerController. Sometimes I would get a black screen, other times I would get complaints about an invalid context (when using code similar to yours). I couldn't find a solution, so instead I implemented an AVFoundation camera and haven't looked back since. Here's some quick source code to help you out.

ViewController.h

// Frameworks
#import <CoreVideo/CoreVideo.h>
#import <CoreMedia/CoreMedia.h>
#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

@interface CameraViewController : UIViewController <AVCaptureVideoDataOutputSampleBufferDelegate>

// Camera
@property (strong, nonatomic) AVCaptureSession* captureSession;
@property (strong, nonatomic) AVCaptureVideoPreviewLayer* previewLayer;
@property (strong, nonatomic) UIImage* cameraImage;

@end

ViewController.m

#import "CameraViewController.h"

@implementation CameraViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    [self setupCamera];
}

- (void)setupCamera
{
    // For brevity the NSError out-parameter is ignored here; check it in production code.
    AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] error:nil];
    AVCaptureVideoDataOutput* output = [[AVCaptureVideoDataOutput alloc] init];
    output.alwaysDiscardsLateVideoFrames = YES;

    dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];

    NSString* key = (NSString *) kCVPixelBufferPixelFormatTypeKey;
    NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [output setVideoSettings:videoSettings];

    self.captureSession = [[AVCaptureSession alloc] init];
    [self.captureSession addInput:input];
    [self.captureSession addOutput:output];
    [self.captureSession setSessionPreset:AVCaptureSessionPresetPhoto];

    self.previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
    self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;

    // CHECK FOR YOUR APP
    self.previewLayer.frame = CGRectMake(0, 0, self.view.frame.size.height, self.view.frame.size.width);
    self.previewLayer.orientation = AVCaptureVideoOrientationLandscapeRight; // deprecated; on newer SDKs use previewLayer.connection.videoOrientation
    // CHECK FOR YOUR APP

    [self.view.layer insertSublayer:self.previewLayer atIndex:0];

    [self.captureSession startRunning];
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer,0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);

    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    // This delegate fires on cameraQueue, so cameraImage is replaced off the main thread on every frame.
    self.cameraImage = [UIImage imageWithCGImage:newImage];

    CGImageRelease(newImage);

    CVPixelBufferUnlockBaseAddress(imageBuffer,0);
}

// Call whenever you need a snapshot
- (UIImage *)snapshot
{
    NSLog(@"SNAPSHOT");
    return self.cameraImage;
}

@end
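
A hypothetical call site, assuming the controller above is embedded as a child view controller and exposed through a cameraVC property:

// Grab the most recent camera frame whenever you take your screenshot.
UIImage *cameraFrame = [self.cameraVC snapshot];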

This code captures the input image at the resolution of the selected preset (in this case, photo: 852x640), so if you want to capture it along with the rest of the view I would recommend one of the following options:

  1. Scale, crop, and translate your image after capture. Pros: the camera keeps running smoothly. Cons: more code.
  2. Add a UIImageView in place of the previewLayer and update its image in the captureOutput delegate (see the sketch after this list). Pros: WYSIWYG. Cons: may make the camera run slower.
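
A minimal sketch of option 2, assuming a cameraImageView property added to the controller above (the name is hypothetical); the dispatch to the main queue matters because the delegate fires on cameraQueue:

// In setupCamera, in place of the previewLayer:
self.cameraImageView = [[UIImageView alloc] initWithFrame:self.view.bounds];
self.cameraImageView.contentMode = UIViewContentModeScaleAspectFill;
[self.view insertSubview:self.cameraImageView atIndex:0];

// At the end of captureOutput:didOutputSampleBuffer:fromConnection:
dispatch_async(dispatch_get_main_queue(), ^{
    self.cameraImageView.image = self.cameraImage;
});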

In both cases you would then need to merge the resulting capture with your other images after taking the screenshot, as sketched below (not as hard as it sounds).
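
As a rough illustration of the merge, here is a minimal sketch assuming overlayView is the rest of your UI with a transparent region where the camera container sits (the name and method are hypothetical; the camera frame is drawn stretched for brevity, so apply the option-1 scale/crop if aspect ratio matters):

- (UIImage *)mergedSnapshotWithOverlay:(UIView *)overlayView
{
    CGSize size = overlayView.bounds.size;
    UIGraphicsBeginImageContextWithOptions(size, YES, 0.0f);

    // Latest camera frame goes underneath everything else.
    [self.cameraImage drawInRect:CGRectMake(0, 0, size.width, size.height)];

    // Render the UI on top; it must be transparent over the camera area.
    [overlayView.layer renderInContext:UIGraphicsGetCurrentContext()];

    UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return merged;
}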

AVFoundation and its associated frameworks can be quite daunting, so this is a very lean implementation to get what you're after. If you want more detail, Apple's AVFoundation Programming Guide and the AVCam sample code are good starting points.

Hope that helps!
