
Swift 4: How to create a face map with the iOS 11 Vision framework from face landmark points

I am using the iOS 11 Vision framework to yield the face landmark points in real time. I am able to get the face landmark points and overlay the camera layer with a UIBezierPath of those points. However, I would like to get something like the bottom-right picture. Currently I have something that looks like the left picture, and I tried looping through the points and adding midpoints, but I don't know how to generate all those triangles from the points. How would I go about generating the map on the right from the points on the left?

I'm not sure this is possible with just the points I have. It may not help much, but I also have the points from the bounding box of the entire face. Lastly, if there is any framework that would allow me to recognize all the points I need, such as OpenCV or something else, please let me know. Thanks!

[image: face_map]

Here is the code I've been using, from https://github.com/DroidsOnRoids/VisionFaceDetection :

func detectLandmarks(on image: CIImage) {
    try? faceLandmarksDetectionRequest.perform([faceLandmarks], on: image)
    if let landmarksResults = faceLandmarks.results as? [VNFaceObservation] {

        for observation in landmarksResults {

            DispatchQueue.main.async {
                if let boundingBox = self.faceLandmarks.inputFaceObservations?.first?.boundingBox {
                    let faceBoundingBox = boundingBox.scaled(to: self.view.bounds.size)
                    // Convert and draw each landmark region in turn
                    let faceContour = observation.landmarks?.faceContour
                    self.convertPointsForFace(faceContour, faceBoundingBox)

                    let leftEye = observation.landmarks?.leftEye
                    self.convertPointsForFace(leftEye, faceBoundingBox)

                    let rightEye = observation.landmarks?.rightEye
                    self.convertPointsForFace(rightEye, faceBoundingBox)

                    let leftPupil = observation.landmarks?.leftPupil
                    self.convertPointsForFace(leftPupil, faceBoundingBox)

                    let rightPupil = observation.landmarks?.rightPupil
                    self.convertPointsForFace(rightPupil, faceBoundingBox)

                    let nose = observation.landmarks?.nose
                    self.convertPointsForFace(nose, faceBoundingBox)

                    let innerLips = observation.landmarks?.innerLips
                    self.convertPointsForFace(innerLips, faceBoundingBox)

                    let leftEyebrow = observation.landmarks?.leftEyebrow
                    self.convertPointsForFace(leftEyebrow, faceBoundingBox)

                    let rightEyebrow = observation.landmarks?.rightEyebrow
                    self.convertPointsForFace(rightEyebrow, faceBoundingBox)

                    let noseCrest = observation.landmarks?.noseCrest
                    self.convertPointsForFace(noseCrest, faceBoundingBox)

                    let outerLips = observation.landmarks?.outerLips
                    self.convertPointsForFace(outerLips, faceBoundingBox)
                }
            }
        }
    }

}
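
For context, faceLandmarks and faceLandmarksDetectionRequest are defined elsewhere in the linked project; roughly, they are a VNDetectFaceLandmarksRequest and a VNSequenceRequestHandler. A minimal sketch of that setup (the names follow the linked DroidsOnRoids project, so verify against that repo):

import Vision

// Properties the snippet above relies on; names match the linked
// DroidsOnRoids project, but check the repo for the exact setup.
let faceLandmarks = VNDetectFaceLandmarksRequest()
let faceLandmarksDetectionRequest = VNSequenceRequestHandler()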

func convertPointsForFace(_ landmark: VNFaceLandmarkRegion2D?, _ boundingBox: CGRect) {
    if let points = landmark?.points, let count = landmark?.pointCount {
        let convertedPoints = convert(points, with: count)

        // Points are normalized to the face bounding box; scale them
        // into the view-space rectangle computed above.
        let faceLandmarkPoints = convertedPoints.map { (point: (x: CGFloat, y: CGFloat)) -> (x: CGFloat, y: CGFloat) in
            let pointX = point.x * boundingBox.width + boundingBox.origin.x
            let pointY = point.y * boundingBox.height + boundingBox.origin.y

            return (x: pointX, y: pointY)
        }

        DispatchQueue.main.async {
            self.draw(points: faceLandmarkPoints)
        }
    }
}
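
As an aside, on the release iOS 11 Vision API, VNFaceLandmarkRegion2D exposes a normalizedPoints array of CGPoint directly, so the pointer-based convert(_:with:) helper isn't needed. A sketch of the same conversion written against it (same bounding-box scaling as above; the function name is mine):

func convertedPoints(for landmark: VNFaceLandmarkRegion2D?, in boundingBox: CGRect) -> [(x: CGFloat, y: CGFloat)] {
    guard let normalizedPoints = landmark?.normalizedPoints else { return [] }
    // Each point is in [0, 1] relative to the face bounding box.
    return normalizedPoints.map { point in
        (x: point.x * boundingBox.width + boundingBox.origin.x,
         y: point.y * boundingBox.height + boundingBox.origin.y)
    }
}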


func draw(points: [(x: CGFloat, y: CGFloat)]) {
    guard let first = points.first else { return }

    let newLayer = CAShapeLayer()
    newLayer.strokeColor = UIColor.blue.cgColor
    newLayer.lineWidth = 4.0
    // Without this, CAShapeLayer fills the path with opaque black by default.
    newLayer.fillColor = UIColor.clear.cgColor

    // Connect the points in order, then close the loop back to the start.
    let path = UIBezierPath()
    path.move(to: CGPoint(x: first.x, y: first.y))
    for point in points.dropFirst() {
        path.addLine(to: CGPoint(x: point.x, y: point.y))
    }
    path.addLine(to: CGPoint(x: first.x, y: first.y))
    newLayer.path = path.cgPath

    shapeLayer.addSublayer(newLayer)
}
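
To build a mesh rather than per-region outlines, the first step is collecting every region's converted points into a single array. A sketch, built on the convertedPoints(for:in:) helper above:

func allLandmarkPoints(for observation: VNFaceObservation, in boundingBox: CGRect) -> [(x: CGFloat, y: CGFloat)] {
    // Every region Vision provides (eyebrows down; it has no forehead points).
    let regions: [VNFaceLandmarkRegion2D?] = [
        observation.landmarks?.faceContour,
        observation.landmarks?.leftEye,
        observation.landmarks?.rightEye,
        observation.landmarks?.leftEyebrow,
        observation.landmarks?.rightEyebrow,
        observation.landmarks?.nose,
        observation.landmarks?.noseCrest,
        observation.landmarks?.innerLips,
        observation.landmarks?.outerLips
    ]
    return regions.compactMap { $0 }
                  .flatMap { convertedPoints(for: $0, in: boundingBox) }
}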

I did end up finding a solution that works. I used Delaunay triangulation via https://github.com/AlexLittlejohn/DelaunaySwift, and I modified it to work with the points generated by the Vision framework's face landmark detection request. This is not easily explained with a code snippet, so I have linked my GitHub repo below, which shows my solution. Note that this doesn't get points from the forehead, as the Vision framework only detects points from the eyebrows down.

https://github.com/ahashim1/Face
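
The gist of that approach: triangulate the combined landmark point set and stroke each triangle. A minimal sketch, assuming DelaunaySwift's Vertex/Triangle types and triangulate API (verify the exact signatures against that repo; drawMesh is a name I made up):

func drawMesh(points: [(x: CGFloat, y: CGFloat)]) {
    // Triangulate the full landmark point set.
    let vertices = points.map { Vertex(x: Double($0.x), y: Double($0.y)) }
    let triangles = Delaunay().triangulate(vertices)

    let meshLayer = CAShapeLayer()
    meshLayer.strokeColor = UIColor.blue.cgColor
    meshLayer.fillColor = UIColor.clear.cgColor
    meshLayer.lineWidth = 1.0

    // Stroke each triangle's three edges.
    let path = UIBezierPath()
    for triangle in triangles {
        path.move(to: CGPoint(x: triangle.vertex1.x, y: triangle.vertex1.y))
        path.addLine(to: CGPoint(x: triangle.vertex2.x, y: triangle.vertex2.y))
        path.addLine(to: CGPoint(x: triangle.vertex3.x, y: triangle.vertex3.y))
        path.close()
    }
    meshLayer.path = path.cgPath
    shapeLayer.addSublayer(meshLayer)
}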

What you want in the image on the right is a Candide mesh. You need to map these points to the mesh, and that will be it. I don't think you need to go the route that has been discussed in the comments.

P.S. I found Candide while going through the APK contents of a famous filters app (it reminds me of Casper). I haven't had the time to try it myself yet.
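
For what it's worth, a rough sketch of what "mapping points to the mesh" means in code terms: a Candide-style model is just a fixed vertex list plus a fixed triangle (index) list, so once landmark points are assigned to vertices, drawing is the same as in the Delaunay sketch above but with a precomputed topology. All names here are illustrative, not a real Candide loader:

struct CandideMesh {
    // Each triangle is a triple of indices into a vertex array.
    let triangles: [(Int, Int, Int)]
}

func meshPath(for mesh: CandideMesh, vertices: [CGPoint]) -> UIBezierPath {
    let path = UIBezierPath()
    for (a, b, c) in mesh.triangles {
        path.move(to: vertices[a])
        path.addLine(to: vertices[b])
        path.addLine(to: vertices[c])
        path.close()
    }
    return path
}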
