
Camera live scanning iOS

I'm developing an iOS application in which I need to do some live scanning of an object. For that, I need to capture only 3 or 4 frames per second. Here is the code I use to create the capture session:

// Create an AVCaptureSession
AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
captureSession.sessionPreset = AVCaptureSessionPresetHigh;

// Find a suitable AVCaptureDevice
AVCaptureDevice *photoCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

// Create and add an AVCaptureDeviceInput
NSError *error = nil;
AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:photoCaptureDevice error:&error];
if (videoInput && [captureSession canAddInput:videoInput]) {
    [captureSession addInput:videoInput];
} else {
    NSLog(@"Could not add video input: %@", error);
}

// Create and add an AVCaptureVideoDataOutput
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// We want BGRA; both CoreGraphics and OpenGL work well with it.
// (kCVPixelFormatType_32BGRA is the Core Video spelling of the same value.)
NSDictionary *rgbOutputSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
[videoOutput setVideoSettings:rgbOutputSettings];

// Deliver sample buffers on a dedicated serial queue, then start the session
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[videoOutput setSampleBufferDelegate:self queue:queue];

if ([captureSession canAddOutput:videoOutput]) {
    [captureSession addOutput:videoOutput];
}

[captureSession startRunning];

// Set up the preview layer for the camera
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
previewLayer.frame = cameraViewCanvas.bounds;

// Add the preview layer as a sublayer of the main view's layer
[cameraViewCanvas.layer addSublayer:previewLayer];
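
Note that nothing above throttles the frame rate, so the delegate fires at the camera's native rate (typically 30 fps) rather than the 3-4 fps the question asks for. A minimal sketch of capping the rate, assuming iOS 7+ where the frame-duration properties live on AVCaptureDevice (on earlier systems the equivalent property was on AVCaptureConnection):

// Cap capture at roughly 4 fps so the delegate isn't flooded with frames.
// (Assumes the device's active format supports a 4 fps frame-rate range.)
NSError *configError = nil;
if ([photoCaptureDevice lockForConfiguration:&configError]) {
    photoCaptureDevice.activeVideoMinFrameDuration = CMTimeMake(1, 4); // 1/4 s per frame
    photoCaptureDevice.activeVideoMaxFrameDuration = CMTimeMake(1, 4);
    [photoCaptureDevice unlockForConfiguration];
} else {
    NSLog(@"Could not lock device for configuration: %@", configError);
}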

And the delegate method that is called on the queue:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    if (isCapturing) {
        NSLog(@"output");

        CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        // CMCopyDictionaryOfAttachments follows the Create rule, so release it below
        CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
        CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(__bridge NSDictionary *)attachments];
        if (attachments) {
            CFRelease(attachments);
        }

        UIImage *newFrame = [[UIImage alloc] initWithCIImage:ciImage];
        [self showImage:newFrame];
    }
}

The problem is that I can't see the image on screen: there are no errors or warnings, but no image is displayed. My question is: am I on the right track, and what needs fixing in my code so the image shows up on screen?

A bit late, but the problem is most likely that the image is not being set on the main thread (captureOutput:didOutputSampleBuffer: is, in all likelihood, being called on the separate dispatch queue you created). Dispatch the UI update back to the main queue:

dispatch_async(dispatch_get_main_queue(), ^{
  [self showImage:newFrame];
});

or:

[self performSelectorOnMainThread:@selector(showImage:) withObject:newFrame waitUntilDone:YES];
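
One more thing worth checking (an assumption, since showImage: isn't shown): a UIImage created with initWithCIImage: has no CGImage backing, and such images frequently render as blank in a UIImageView. Rendering the CIImage through a CIContext first sidesteps that; a sketch, combined with the main-thread dispatch:

// Render the CIImage to a CGImage first; a UIImage backed only by a
// CIImage often draws as blank in UIImageView. Reuse one CIContext
// across frames, since creating one is expensive.
static CIContext *ciContext = nil;
if (!ciContext) {
    ciContext = [CIContext contextWithOptions:nil];
}
CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:[ciImage extent]];
UIImage *newFrame = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);

dispatch_async(dispatch_get_main_queue(), ^{
    [self showImage:newFrame];
});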
