
How can I do fast image processing from the iPhone camera?

I am trying to write an iPhone application which will do some real-time camera image processing. I used the example presented in the AVFoundation docs as a starting point: set up a capture session, make a UIImage from the sample buffer data, then draw the image at a point via -setNeedsDisplay, which I call on the main thread.
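For reference, the sample-buffer-to-UIImage conversion follows the pattern from the AVFoundation documentation; a minimal sketch (assuming the capture session is configured for kCVPixelFormatType_32BGRA) looks roughly like this:

// Sketch of the documented CMSampleBuffer -> UIImage conversion, assuming 32BGRA pixel buffers.
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Wrap the BGRA pixel data in a bitmap context and snapshot it as a CGImage.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
                                                 colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    UIImage *image = [UIImage imageWithCGImage:quartzImage];
    CGImageRelease(quartzImage);
    return image;
}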

This works, but it is fairly slow (50 ms per frame, measured between -drawRect: calls, for a 192 x 144 preset), and I've seen applications on the App Store which work faster than this. About half of my time is spent in -setNeedsDisplay.

How can I speed up this image processing?

As Steve points out, in my answer here I encourage people to look at OpenGL ES for the best performance when processing and rendering images to the screen from the iPhone's camera. The reason for this is that using Quartz to continually update a UIImage onto the screen is a fairly slow way to send raw pixel data to the display.

If possible, I encourage you to look to OpenGL ES to do your actual processing, because of how well-tuned GPUs are for this kind of work. If you need to maintain OpenGL ES 1.1 compatibility, your processing options are much more limited than with 2.0's programmable shaders, but you can still do some basic image adjustment.
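As an illustration of what 2.0's programmable pipeline buys you, a basic per-pixel adjustment such as saturation is only a few lines of fragment shader. This is a hypothetical sketch (the uniform and varying names are illustrative), stored as an Objective-C string you would hand to glShaderSource and glCompileShader:

// Hypothetical ES 2.0 fragment shader for a saturation adjustment (sketch only).
static NSString *const kSaturationFragmentShader =
    @"varying highp vec2 textureCoordinate;\n"
    @"uniform sampler2D inputImageTexture;\n"
    @"uniform lowp float saturation;\n"
    @"const mediump vec3 luminanceWeighting = vec3(0.2125, 0.7154, 0.0721);\n"
    @"void main() {\n"
    @"    lowp vec4 color = texture2D(inputImageTexture, textureCoordinate);\n"
    @"    lowp float luminance = dot(color.rgb, luminanceWeighting);\n"
    @"    gl_FragColor = vec4(mix(vec3(luminance), color.rgb, saturation), color.a);\n"
    @"}";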

Even if you're doing all of your image processing using the raw data on the CPU, you'll still be much better off using an OpenGL ES texture for the image data and updating it with each frame. You'll see a jump in performance just by switching to that rendering route.
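As a rough sketch of that route (assuming an EAGLContext is current on the delegate queue, textureID was already created with glGenTextures, and the device supports the BGRA texture format extension that defines GL_BGRA_EXT), each new frame can be pushed straight into a texture:

// Sketch: upload each camera frame into an existing OpenGL ES texture.
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);

int bufferWidth = (int)CVPixelBufferGetWidth(pixelBuffer);
int bufferHeight = (int)CVPixelBufferGetHeight(pixelBuffer);

glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0,
             GL_BGRA_EXT, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(pixelBuffer));

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
// ...then draw a textured quad (ES 1.1) or run a fragment shader over it (ES 2.0).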

(Update: 2/18/2012) As I describe in my update to the above-linked answer, I've made this process much easier with my new open source GPUImage framework. (更新:2012年2月18日)正如我在上述链接答案的更新中描述的那样,我使用新的开源GPUImage框架使这个过程变得更加容易。 This handles all of the OpenGL ES interaction for you, so you can just focus on applying the filters and other effects that you'd like to on your incoming video. 它可以为您处理所有OpenGL ES交互,因此您可以专注于在传入视频上应用过滤器和其他您想要的效果。 It's anywhere from 5-70X faster than doing this processing using CPU-bound routines and manual display updates. 它使用CPU绑定例程和手动显示更新比使用此处理快5-70倍。
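To give a flavor of the API, a minimal camera-to-filter-to-view chain based on the framework's README might look like this (the filter choice and preset here are arbitrary, and self.view is assumed to be a GPUImageView):

// Minimal GPUImage sketch: camera -> filter -> on-screen view.
GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];
[videoCamera addTarget:sepiaFilter];

GPUImageView *filteredVideoView = (GPUImageView *)self.view; // assumed to be a GPUImageView
[sepiaFilter addTarget:filteredVideoView];

[videoCamera startCameraCapture];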

Set the sessionPreset of the capture session to AVCaptureSessionPresetLow, as shown in the sample code below. This will increase the processing speed, but the image from the buffer will be of lower quality.

- (void)initCapture {
    // Camera input from the default video device
    AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput 
                                          deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] 
                                          error:nil];

    // Video data output that hands raw frames to the sample buffer delegate
    AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    captureOutput.alwaysDiscardsLateVideoFrames = YES;   // drop frames if processing falls behind
    captureOutput.minFrameDuration = CMTimeMake(1, 25);  // cap delivery at 25 fps (deprecated in later iOS versions)

    // Deliver sample buffers on a dedicated serial queue
    dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL);
    [captureOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Ask for BGRA pixel buffers, which are convenient for CPU or GL processing
    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [captureOutput setVideoSettings:videoSettings];

    // Assemble the session; a low preset trades image quality for speed
    self.captureSession = [[AVCaptureSession alloc] init];
    [self.captureSession addInput:captureInput];
    [self.captureSession addOutput:captureOutput];
    self.captureSession.sessionPreset = AVCaptureSessionPresetLow;
    /* sessionPreset: choose an appropriate value to get the desired speed */

    // On-screen preview layer
    if (!self.prevLayer) {
        self.prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
    }
    self.prevLayer.frame = self.view.bounds;
    self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:self.prevLayer];
}
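Note that the snippet above only configures the session; starting it and implementing the delegate callback (both omitted above, sketched below) are still needed before any frames arrive:

// After -initCapture returns, start the session so frames begin flowing:
//     [self.captureSession startRunning];

// Frames are then delivered on "cameraQueue" via the delegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // Process the 32BGRA pixel buffer here; keep this method fast, because
    // alwaysDiscardsLateVideoFrames drops frames whenever processing lags.
}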
