
Real time face tracking with camera in Swift 4

I want to be able to track a user's face from the camera feed. I have looked at this SO post. I used the code given in the answer, but it did not seem to do anything. I have heard that

func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!)

has been changed to something else in Swift 4. Could this be the problem with the code?

While tracking the face, I also want to monitor face landmarks with CIFaceFeature. How would I do this?

I have found a starting point here: https://github.com/jeffreybergier/Blog-Getting-Started-with-Vision

Basically, you can instantiate a video capture session by declaring a lazy variable like this:

private lazy var captureSession: AVCaptureSession = {
    let session = AVCaptureSession()
    session.sessionPreset = AVCaptureSession.Preset.photo
    guard
        // use the front-facing wide-angle camera as the video input
        let frontCamera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front),
        let input = try? AVCaptureDeviceInput(device: frontCamera)
        else { return session }
    session.addInput(input)
    return session
}()
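
Note that this only adds an input. For the captureOutput(_:didOutput:from:) delegate method below to ever be called, the session also needs an AVCaptureVideoDataOutput whose sample buffer delegate is the view controller (which must adopt AVCaptureVideoDataOutputSampleBufferDelegate), and the later snippet refers to a visionSequenceHandler property. A rough sketch of what the answer seems to assume, with import AVFoundation and import Vision at the top of the file (the configureVideoOutput name and the queue label are mine, not from the answer):

// property holding the Vision handler used in captureOutput below
private let visionSequenceHandler = VNSequenceRequestHandler()

// call this (e.g. from viewDidLoad) before starting the session
private func configureVideoOutput() {
    let videoOutput = AVCaptureVideoDataOutput()
    // deliver frames to this view controller on a background queue
    videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "VideoDataOutputQueue"))
    if captureSession.canAddOutput(videoOutput) {
        captureSession.addOutput(videoOutput)
    }
}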

Then inside viewDidLoad you start the session:

self.captureSession.startRunning()

And finally, you can perform your requests inside

func captureOutput(_ output: AVCaptureOutput, 
    didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
}

For example:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard
        // make sure the sample buffer can be converted to a pixel buffer
        let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        else { return }

    // ask Vision for face rectangles in this frame
    let faceRequest = VNDetectFaceRectanglesRequest(completionHandler: self.faceDetectedRequestUpdate)

    // perform the request on the current pixel buffer
    do {
        try self.visionSequenceHandler.perform([faceRequest], on: pixelBuffer)
    } catch {
        print("Throws: \(error)")
    }
}

And then you define your faceDetectedRequestUpdate function.
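
The answer doesn't show that function; a minimal sketch of what such a completion handler might look like (the printout and the hop to the main queue are my assumptions, not part of the original answer):

func faceDetectedRequestUpdate(_ request: VNRequest, error: Error?) {
    // the results are VNFaceObservation values with normalized bounding boxes
    guard let observations = request.results as? [VNFaceObservation] else { return }
    DispatchQueue.main.async {
        for face in observations {
            // face.boundingBox is normalized (0...1, origin at the bottom-left);
            // convert to view coordinates before drawing any overlay
            print("Face at \(face.boundingBox)")
        }
    }
}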

Anyway, I have to say that I haven't been able to figure out how to create a working example from here. The best working example I have found is in Apple's documentation: https://developer.apple.com/documentation/vision/tracking_the_user_s_face_in_real_time
