
How can I recognize a mouth and teeth within a face on iOS?

I know that Core Image on iOS 5.0 supports facial detection (another example of this), which gives the overall location of a face, as well as the locations of the eyes and mouth within that face.

However, I'd like to refine this location to detect the position of the mouth and teeth within it. My goal is to place a mouth guard over a user's mouth and teeth.

Is there a way to accomplish this on iOS?

I pointed out in my blog that the tutorial has something wrong.

Part 5) Adjust For The Coordinate System: it says you need to change the window's and image's coordinates, but that is exactly what you shouldn't do. You shouldn't change your views/windows (in UIKit coordinates) to match Core Image coordinates as in the tutorial; you should do it the other way around.

This is the part of the code relevant to doing that (you can get the whole sample code from my blog post or directly from here; it contains this and other examples using CIFilters too :D):

// Create the image and detector
CIImage *image = [CIImage imageWithCGImage:imageView.image.CGImage];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil // the original elides these; a nil
                                          options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];
                                          // context and high accuracy are typical choices

// CoreImage coordinate system origin is at the bottom left corner and UIKit's
// is at the top left corner. So we need to translate features positions before
// drawing them to screen. In order to do so we make an affine transform
CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
transform = CGAffineTransformTranslate(transform,
                                       0, -imageView.bounds.size.height);

// Get features from the image
NSArray *features = [detector featuresInImage:image];
for(CIFaceFeature* faceFeature in features) {

    // Get the face rect: Convert CoreImage to UIKit coordinates
    const CGRect faceRect = CGRectApplyAffineTransform(
                              faceFeature.bounds, transform);

    // create a UIView using the bounds of the face
    UIView *faceView = [[UIView alloc] initWithFrame:faceRect];

    // ...

    if(faceFeature.hasMouthPosition) {
        // Get the mouth position translated to imageView UIKit coordinates
        const CGPoint mouthPos = CGPointApplyAffineTransform(
                                   faceFeature.mouthPosition, transform);
        // ...
    }
}

Once you get the mouth position (mouthPos), you simply place your thing on or near it.
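As a minimal sketch of that step (continuing inside the hasMouthPosition branch above; "mouthGuard" is a hypothetical asset name, not something from the original code), you could center an overlay view on the converted point:

    // Sketch: center a mouth-guard overlay on the detected mouth position.
    // "mouthGuard" is a hypothetical image asset; mouthPos is already in
    // imageView's UIKit coordinates thanks to the affine transform above.
    UIImageView *guardView = [[UIImageView alloc]
        initWithImage:[UIImage imageNamed:@"mouthGuard"]];
    guardView.center = mouthPos;
    [imageView addSubview:guardView];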

This distance could be calculated experimentally and should be relative to the triangle formed by the eyes and the mouth. I would use a lot of faces to calibrate it if possible (Twitter avatars?).
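Building on the sketch above, one way to use that triangle (my own heuristic, not part of the original answer) is to scale the overlay by the inter-eye distance, which CIFaceFeature also exposes:

    // Sketch: size the overlay relative to the eye-mouth triangle.
    // kGuardWidthFactor and the 0.5 aspect ratio are hypothetical values
    // meant to be tuned experimentally on many faces.
    if (faceFeature.hasLeftEyePosition && faceFeature.hasRightEyePosition) {
        const CGPoint leftEye  = CGPointApplyAffineTransform(
                                   faceFeature.leftEyePosition, transform);
        const CGPoint rightEye = CGPointApplyAffineTransform(
                                   faceFeature.rightEyePosition, transform);
        const CGFloat eyeDistance = hypot(rightEye.x - leftEye.x,
                                          rightEye.y - leftEye.y);

        const CGFloat kGuardWidthFactor = 1.2; // hypothetical, calibrate
        const CGFloat guardWidth  = eyeDistance * kGuardWidthFactor;
        const CGFloat guardHeight = guardWidth * 0.5; // hypothetical aspect

        guardView.bounds = CGRectMake(0, 0, guardWidth, guardHeight);
        guardView.center = mouthPos;
    }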

Hope it helps :)
