Camera live scanning iOS

I am developing an iOS application in which I need to do some live scanning of objects. For that, I need to capture 3 or 4 frames per second. This is the code I use for creating the capture session:

    // Create an AVCaptureSession
    AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
    captureSession.sessionPreset = AVCaptureSessionPresetHigh;

    // Find a suitable AVCaptureDevice
    AVCaptureDevice *photoCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Create and add an AVCaptureDeviceInput
    NSError *error = nil;
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:photoCaptureDevice error:&error];
    if(videoInput){
        [captureSession addInput:videoInput];
    }

    // Create and add an AVCaptureVideoDataOutput
    AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    // We want BGRA; both CoreGraphics and OpenGL work well with BGRA
    NSDictionary *rgbOutputSettings = [NSDictionary dictionaryWithObject:
                                       [NSNumber numberWithInt:kCMPixelFormat_32BGRA] forKey:(__bridge id)kCVPixelBufferPixelFormatTypeKey];
    [videoOutput setVideoSettings:rgbOutputSettings];

    // Configure your output, and start the session
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [videoOutput setSampleBufferDelegate:self queue:queue];

    if(videoOutput){
        [captureSession addOutput:videoOutput];
    }

    [captureSession startRunning];


    // Setting up the preview layer for the camera
    AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    previewLayer.frame = cameraViewCanvas.bounds;

    // ADDING FINAL VIEW layer TO THE MAIN VIEW sublayer
    [cameraViewCanvas.layer addSublayer:previewLayer];
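
Incidentally, since only 3 or 4 frames per second are needed, the capture rate can also be capped at the device level instead of discarding frames in the delegate. A minimal sketch, assuming iOS 7+ and reusing the photoCaptureDevice from above (the chosen duration must fall within the active format's videoSupportedFrameRateRanges, otherwise setting it throws):

    // Cap the device at ~4 fps (a frame duration of 1/4 second)
    NSError *configError = nil;
    if ([photoCaptureDevice lockForConfiguration:&configError]) {
        photoCaptureDevice.activeVideoMinFrameDuration = CMTimeMake(1, 4);
        photoCaptureDevice.activeVideoMaxFrameDuration = CMTimeMake(1, 4);
        [photoCaptureDevice unlockForConfiguration];
    }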

And the delegate method, called on the queue:

-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
    if(isCapturing){
        NSLog(@"output");

        CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
        CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(__bridge NSDictionary *)attachments];
        // CMCopyDictionaryOfAttachments follows the Create rule, so release the copy
        if (attachments) CFRelease(attachments);

        UIImage *newFrame = [[UIImage alloc] initWithCIImage:ciImage];
        [self showImage:newFrame];
    }
}
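
One pitfall with the last two lines: a UIImage created straight from a CIImage has no backing CGImage and often fails to draw when assigned to a UIImageView. A hedged alternative is to render through a CIContext first; a sketch (for brevity the context is created inline here, but in practice it should be created once, e.g. as an ivar, since building a CIContext per frame is expensive):

    CIContext *ciContext = [CIContext contextWithOptions:nil];
    // Render the CIImage into a CGImage before wrapping it in a UIImage
    CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:[ciImage extent]];
    UIImage *newFrame = [[UIImage alloc] initWithCGImage:cgImage];
    CGImageRelease(cgImage); // createCGImage: follows the Create rule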

The problem is that I cannot see the image on the screen: there are no errors or warnings, but the image is not displayed. My question is: am I on the right track, and what needs to be fixed in my code to show the image on the screen?

A bit late, but the problem may be that the image is not being set on the main thread (captureOutput is most likely called on the separate dispatch queue you created):

dispatch_async(dispatch_get_main_queue(), ^{
  [self showImage:newFrame];
});

or

[self performSelectorOnMainThread:@selector(showImage:) withObject:newFrame waitUntilDone:YES];
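
Putting it together, a minimal sketch of the full delegate with the main-thread dispatch applied (isCapturing and showImage: are the question's own members; the CFRelease matches the Create-rule copy made in the method):

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    if (!isCapturing) {
        return;
    }

    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
    CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(__bridge NSDictionary *)attachments];
    if (attachments) {
        CFRelease(attachments);
    }

    UIImage *newFrame = [[UIImage alloc] initWithCIImage:ciImage];

    // UIKit must only be touched from the main thread
    dispatch_async(dispatch_get_main_queue(), ^{
        [self showImage:newFrame];
    });
}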
