
How can Vision be used to identify visible face landmarks?

I've been using Vision to identify facial landmarks with VNDetectFaceLandmarksRequest.

It seems that whenever a face is detected, the resulting VNFaceObservation always contains all the possible landmarks, with positions for every one of them. The positions returned for occluded landmarks appear to be 'guessed' by the framework.

I have tested this using a photo where the subject's face is turned to the left, so the left eye isn't visible. Vision still returns a left-eye landmark, along with a position.
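The behavior described above can be reproduced with a minimal request like the following (a sketch only; Vision is available on Apple platforms, and the typed results property assumes a recent SDK):

```swift
import Vision
import CoreGraphics

// Minimal sketch: run a face-landmarks request on a CGImage.
// Even when part of the face is turned away or covered, the returned
// observations carry non-nil landmark regions with estimated positions.
func detectLandmarks(in image: CGImage) throws -> [VNFaceObservation] {
    let request = VNDetectFaceLandmarksRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    return request.results ?? []
}

// Example usage (assumes `cgImage` is a photo with a turned face):
// for observation in try detectLandmarks(in: cgImage) {
//     if let leftEye = observation.landmarks?.leftEye {
//         // Points are reported even when the eye is occluded.
//         print(leftEye.normalizedPoints)
//     }
// }
```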

Same thing with the mouth and nose of a subject wearing an N95 face mask, or the eyes of someone wearing opaque sunglasses.

While this can be a useful feature for other use cases, is there a way, using Vision or CIDetector, to determine which face landmarks are actually visible in a photo?

I also tried CIDetector, but it appears to detect mouths and smiles through N95 masks, so it isn't a reliable alternative either.

After confirmation from Apple, it appears it simply cannot be done. If Vision detects a face, it will guess some occluded landmarks' positions, and there is no way to differentiate actually detected landmarks from guesses.

For those facing the same issue, a partial workaround is to compare each landmark's points to the points of the median line and the nose crest.

While this can help determine if a facial landmark is occluded by the face itself, it won't help with facial landmarks occluded by opaque sunglasses or face masks.
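The median-line comparison can be sketched with plain geometry on the normalized landmark points. This is a hypothetical helper, not Vision API behavior: the side convention (image-space left vs. the subject's left) and the centroid test are assumptions you would need to tune against real observations.

```swift
import Foundation

struct LandmarkPoint {
    let x: Double
    let y: Double
}

// Average position of a landmark region's points.
func centroid(of points: [LandmarkPoint]) -> LandmarkPoint {
    let n = Double(points.count)
    let sx = points.reduce(0.0) { $0 + $1.x }
    let sy = points.reduce(0.0) { $0 + $1.y }
    return LandmarkPoint(x: sx / n, y: sy / n)
}

// Heuristic sketch: a landmark belonging to the left half of the face should
// sit on the left side of the median line. If its centroid has crossed the
// median line, the head is probably turned far enough that the landmark is
// occluded by the face itself. The left/right convention here is an
// assumption and may need flipping depending on the coordinate space.
func isLikelySelfOccluded(landmark: [LandmarkPoint],
                          medianLine: [LandmarkPoint],
                          expectedLeftOfMedian: Bool) -> Bool {
    let c = centroid(of: landmark)
    let m = centroid(of: medianLine)
    return expectedLeftOfMedian ? c.x >= m.x : c.x <= m.x
}
```

An eye whose points sit well on the expected side of the median line passes the check; one whose points have drifted across it gets flagged as probably self-occluded. As noted above, this says nothing about occlusion by masks or sunglasses.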
