AVCapture Session To Capture Image SWIFT

I have created an AVCaptureSession to capture video output and display it to the user via a UIView. Now I want to be able to click a button (the takePhoto method) and display the image from the session in a UIImageView. I have tried to iterate through each device's connections and save the output, but that hasn't worked. The code I have is below:

let captureSession = AVCaptureSession()
var stillImageOutput: AVCaptureStillImageOutput!

@IBOutlet var imageView: UIImageView!
@IBOutlet var cameraView: UIView!


// If we find a device we'll store it here for later use
var captureDevice : AVCaptureDevice?

override func viewDidLoad() {
    // Do any additional setup after loading the view, typically from a nib.
    super.viewDidLoad()
    println("I AM AT THE CAMERA")
    captureSession.sessionPreset = AVCaptureSessionPresetLow
    self.captureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
    if(captureDevice != nil){
        beginSession()
    }
}
func beginSession() {
    self.stillImageOutput = AVCaptureStillImageOutput()
    self.captureSession.addOutput(self.stillImageOutput)
    var err : NSError? = nil
    self.captureSession.addInput(AVCaptureDeviceInput(device: self.captureDevice, error: &err))

    if err != nil {
        println("error: \(err?.localizedDescription)")
    }

    var previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
    self.cameraView.layer.addSublayer(previewLayer)
    previewLayer?.frame = self.cameraView.layer.frame
    captureSession.startRunning()
}

@IBAction func takePhoto(sender: UIButton) {
    self.stillImageOutput.captureStillImageAsynchronouslyFromConnection(self.stillImageOutput.connectionWithMediaType(AVMediaTypeVideo)) { (buffer:CMSampleBuffer!, error:NSError!) -> Void in
        var image = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(buffer)
        var data_image = UIImage(data: image)
        self.imageView.image = data_image
    }
}
}

You should try moving the session configuration (adding the inputs and outputs) off the main thread before starting the session. In Apple's documentation they state:

Important: The startRunning method is a blocking call which can take some time, therefore you should perform session setup on a serial queue so that the main queue isn't blocked (which keeps the UI responsive). See AVCam for iOS for the canonical implementation example.
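Following that advice, the setup work can be pushed onto a dedicated serial queue. A minimal sketch using the same pre-Swift-3 GCD API as the rest of this question (the queue label "com.example.sessionQueue" is an illustrative assumption, not from the original code):

```swift
// Create a private serial queue for all capture-session work.
// The label is arbitrary; reverse-DNS naming is just a convention.
let sessionQueue = dispatch_queue_create("com.example.sessionQueue", DISPATCH_QUEUE_SERIAL)

dispatch_async(sessionQueue) {
    // Add inputs/outputs and call startRunning() here, off the main
    // queue, so the blocking startRunning call never freezes the UI.
    self.captureSession.startRunning()
}
```

Because the queue is serial, configuration and startRunning are guaranteed to execute in order even if they are dispatched from different places.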

Try using a dispatch in the session-creation method, such as below:

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), { // 1
    self.captureSession.addOutput(self.stillImageOutput)
    self.captureSession.addInput(AVCaptureDeviceInput(device: self.captureDevice, error: &err))
    self.captureSession.sessionPreset = AVCaptureSessionPresetPhoto
    if err != nil {
        println("error: \(err?.localizedDescription)")
    }
    var previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
    previewLayer?.frame = self.cameraView.layer.bounds
    previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    dispatch_async(dispatch_get_main_queue(), { // 2
        // 3
        self.cameraView.layer.addSublayer(previewLayer)
        self.captureSession.startRunning()
    })
})
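For reference, AVCaptureStillImageOutput was deprecated in iOS 10 in favor of AVCapturePhotoOutput. A minimal modern-Swift sketch of the same capture-and-display step (session and preview setup omitted; this is an illustrative outline, not the original poster's code):

```swift
import AVFoundation
import UIKit

// Sketch only: assumes the session is already configured elsewhere and
// that photoOutput has been added to it via captureSession.addOutput(_:).
class CameraViewController: UIViewController, AVCapturePhotoCaptureDelegate {
    let captureSession = AVCaptureSession()
    let photoOutput = AVCapturePhotoOutput()
    @IBOutlet var imageView: UIImageView!

    @IBAction func takePhoto(_ sender: UIButton) {
        photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard error == nil,
              let data = photo.fileDataRepresentation(),
              let image = UIImage(data: data) else { return }
        // Hop to the main queue before touching the UI.
        DispatchQueue.main.async {
            self.imageView.image = image
        }
    }
}
```

The delegate-based flow replaces the completion-handler style of captureStillImageAsynchronouslyFromConnection, and fileDataRepresentation() replaces jpegStillImageNSDataRepresentation.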
