
iOS face detector orientation and setting of CIImage orientation

EDIT: found this code that helped with front-camera images: http://blog.logichigh.com/2008/06/05/uiimage-fix/

Hope others have had a similar issue and can help me out; I haven't found a solution yet. (It may seem a bit long, but it's mostly helper code.)

I'm using the iOS face detector on images acquired from the camera (front and back) as well as images from the gallery. I'm using UIImagePicker both for capturing images with the camera and for selecting images from the gallery; I'm not using AVFoundation for taking pictures as in the SquareCam demo.

I'm getting really messed-up coordinates from the detection (if any), so I wrote a short debug method to get the bounds of the faces, plus a utility that draws a square over them, and I wanted to check which orientation the detector was working with:

#define RECTBOX(R)   [NSValue valueWithCGRect:R]
- (NSArray *)detectFaces:(UIImage *)inputimage
{
    _detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyLow forKey:CIDetectorAccuracy]];
    NSNumber *orientation = [NSNumber numberWithInt:[inputimage imageOrientation]]; // i also saw code where they add +1 to the orientation
    NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:orientation forKey:CIDetectorImageOrientation];

    CIImage *ciimage = [CIImage imageWithCGImage:inputimage.CGImage options:imageOptions];

    // try like this first
    //    NSArray *features = [self.detector featuresInImage:ciimage options:imageOptions];
    // if not working go on to this (trying all orientations)
    NSArray *features;

    int exif;
    // ios face detector. trying all of the orientations
    for (exif = 1; exif <= 8; exif++)
    {
        NSNumber *orientation = [NSNumber numberWithInt:exif];

        NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:orientation forKey:CIDetectorImageOrientation];

        NSTimeInterval start = [NSDate timeIntervalSinceReferenceDate];

        features = [self.detector featuresInImage:ciimage options:imageOptions];

        // log the timing before the possible break, so successful runs are timed too
        NSTimeInterval duration = [NSDate timeIntervalSinceReferenceDate] - start;
        NSLog(@"faceDetection: facedetection total runtime is %f s", duration);

        if (features.count > 0)
        {
            NSString *str = [NSString stringWithFormat:@"found faces using exif %d", exif];
            [faceDetection log:str];
            break;
        }
    }
    if (features.count > 0)
    {
        [faceDetection log:@"-I- Found faces with ios face detector"];
        for (CIFaceFeature *feature in features)
        {
            CGRect rect = feature.bounds;
            // CIImage coordinates are bottom-left-origin; flip y into UIKit space
            CGRect r = CGRectMake(rect.origin.x, inputimage.size.height - rect.origin.y - rect.size.height, rect.size.width, rect.size.height);
            [returnArray addObject:RECTBOX(r)];
        }
        return returnArray;
    } else {
        // no faces from iOS face detector. try OpenCV detector
    }


After trying tons of different pictures, I noticed that the face detector orientation is not consistent with the camera image property. I took a bunch of photos with the front-facing camera where the UIImage orientation was 3 (querying imageOrientation), but the face detector wasn't finding faces for that setting. When running through all of the EXIF possibilities, the face detector finally picked up faces, but for a different orientation altogether.

![face detection results screenshot](http://i.stack.imgur.com/D7bkZ.jpg)

How can I solve this? Is there a mistake in my code?

Another problem I'm having (closely connected with the face detector): when the face detector picks up faces, but for the "wrong" orientation (this happens mostly with the front-facing camera), the UIImage initially used displays correctly in a UIImageView, but when I draw a square overlay (I use OpenCV in my app, so I convert the UIImage to a cv::Mat to draw the overlay with OpenCV), the whole image is rotated 90 degrees. Only the cv::Mat image is rotated, not the UIImage I initially displayed.

The only reasoning I can think of here is that the face detector is messing with some buffer (context?) that the UIImage-to-cv::Mat conversion is using. How can I separate these buffers?

The code for converting a UIImage to cv::Mat (from the "famous" UIImage category someone made):

-(cv::Mat)CVMat
{

    CGColorSpaceRef colorSpace = CGImageGetColorSpace(self.CGImage);
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to backing data
                                                    cols, // Width of bitmap
                                                    rows, // Height of bitmap
                                                    8, // Bits per component
                                                    cvMat.step[0], // Bytes per row
                                                    colorSpace, // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}

- (id)initWithCVMat:(const cv::Mat&)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];

    CGColorSpaceRef colorSpace;

    if (cvMat.elemSize() == 1)
    {
        colorSpace = CGColorSpaceCreateDeviceGray();
    }
    else
    {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    CGImageRef imageRef = CGImageCreate(cvMat.cols,                                     // Width
                                            cvMat.rows,                                     // Height
                                            8,                                              // Bits per component
                                            8 * cvMat.elemSize(),                           // Bits per pixel
                                            cvMat.step[0],                                  // Bytes per row
                                            colorSpace,                                     // Colorspace
                                            kCGImageAlphaNone | kCGBitmapByteOrderDefault,  // Bitmap info flags
                                            provider,                                       // CGDataProviderRef
                                            NULL,                                           // Decode
                                            false,                                          // Should interpolate
                                            kCGRenderingIntentDefault);                     // Intent   

     self = [self initWithCGImage:imageRef];
     CGImageRelease(imageRef);
     CGDataProviderRelease(provider);
     CGColorSpaceRelease(colorSpace);

     return self;
 }  

 -(cv::Mat)CVRgbMat
 {
     cv::Mat tmpimage = self.CVMat;
     cv::Mat image;
     cvtColor(tmpimage, image, cv::COLOR_BGRA2BGR);
     return image;
 }

 - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingImage:(UIImage *)img editingInfo:(NSDictionary *)editInfo {
    self.prevImage = img;
 //   self.previewView.image = img;
    NSArray *arr = [[faceDetection sharedFaceDetector] detectFaces:img];
    for (id r in arr)
    {
         CGRect rect = RECTUNBOX(r);
         //self.previewView.image = img;
         self.previewView.image = [utils drawSquareOnImage:img square:rect];
    }
    [self.imgPicker dismissModalViewControllerAnimated:YES];
    return;
}

I don't think it's a good idea to rotate the whole batch of image pixels to match the CIFaceFeature; you can imagine that redrawing at the rotated orientation is very heavy. I had the same problem, and I solved it by converting the coordinate system of the CIFaceFeature with respect to the UIImageOrientation. I extended the CIFaceFeature class with some conversion methods to get the correct point locations and bounds with respect to the UIImage and its UIImageView (or the CALayer of a UIView). The complete implementation is posted here: https://gist.github.com/laoyang/5747004 — you can use it directly.

Here is the most basic conversion for a point from CIFaceFeature; the returned CGPoint is converted based on the image's orientation:

- (CGPoint) pointForImage:(UIImage*) image fromPoint:(CGPoint) originalPoint {

    CGFloat imageWidth = image.size.width;
    CGFloat imageHeight = image.size.height;

    CGPoint convertedPoint;

    switch (image.imageOrientation) {
        case UIImageOrientationUp:
            convertedPoint.x = originalPoint.x;
            convertedPoint.y = imageHeight - originalPoint.y;
            break;
        case UIImageOrientationDown:
            convertedPoint.x = imageWidth - originalPoint.x;
            convertedPoint.y = originalPoint.y;
            break;
        case UIImageOrientationLeft:
            convertedPoint.x = imageWidth - originalPoint.y;
            convertedPoint.y = imageHeight - originalPoint.x;
            break;
        case UIImageOrientationRight:
            convertedPoint.x = originalPoint.y;
            convertedPoint.y = originalPoint.x;
            break;
        case UIImageOrientationUpMirrored:
            convertedPoint.x = imageWidth - originalPoint.x;
            convertedPoint.y = imageHeight - originalPoint.y;
            break;
        case UIImageOrientationDownMirrored:
            convertedPoint.x = originalPoint.x;
            convertedPoint.y = originalPoint.y;
            break;
        case UIImageOrientationLeftMirrored:
            convertedPoint.x = imageWidth - originalPoint.y;
            convertedPoint.y = originalPoint.x;
            break;
        case UIImageOrientationRightMirrored:
            convertedPoint.x = originalPoint.y;
            convertedPoint.y = imageHeight - originalPoint.x;
            break;
        default:
            break;
    }
    return convertedPoint;
}

And here are the category methods based on the above conversion:

// Get converted features with respect to the imageOrientation property
- (CGPoint) leftEyePositionForImage:(UIImage *)image;
- (CGPoint) rightEyePositionForImage:(UIImage *)image;
- (CGPoint) mouthPositionForImage:(UIImage *)image;
- (CGRect) boundsForImage:(UIImage *)image;

// Get normalized features (0-1) with respect to the imageOrientation property
- (CGPoint) normalizedLeftEyePositionForImage:(UIImage *)image;
- (CGPoint) normalizedRightEyePositionForImage:(UIImage *)image;
- (CGPoint) normalizedMouthPositionForImage:(UIImage *)image;
- (CGRect) normalizedBoundsForImage:(UIImage *)image;

// Get feature location inside of a given UIView size with respect to the imageOrientation property
- (CGPoint) leftEyePositionForImage:(UIImage *)image inView:(CGSize)viewSize;
- (CGPoint) rightEyePositionForImage:(UIImage *)image inView:(CGSize)viewSize;
- (CGPoint) mouthPositionForImage:(UIImage *)image inView:(CGSize)viewSize;
- (CGRect) boundsForImage:(UIImage *)image inView:(CGSize)viewSize;

(Another thing to notice: you need to specify the correct EXIF orientation, derived from the UIImage orientation, when extracting the face features. Quite confusing... here is what I did:

int exifOrientation;
switch (self.image.imageOrientation) {
    case UIImageOrientationUp:
        exifOrientation = 1;
        break;
    case UIImageOrientationDown:
        exifOrientation = 3;
        break;
    case UIImageOrientationLeft:
        exifOrientation = 8;
        break;
    case UIImageOrientationRight:
        exifOrientation = 6;
        break;
    case UIImageOrientationUpMirrored:
        exifOrientation = 2;
        break;
    case UIImageOrientationDownMirrored:
        exifOrientation = 4;
        break;
    case UIImageOrientationLeftMirrored:
        exifOrientation = 5;
        break;
    case UIImageOrientationRightMirrored:
        exifOrientation = 7;
        break;
    default:
        break;
}

NSDictionary *detectorOptions = @{ CIDetectorAccuracy : CIDetectorAccuracyHigh };
CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:detectorOptions];

NSArray *features = [faceDetector featuresInImage:[CIImage imageWithCGImage:self.image.CGImage]
                                          options:@{CIDetectorImageOrientation:[NSNumber numberWithInt:exifOrientation]}];

)

iOS 10 and Swift 3

You can check Apple's sample code, which detects faces as well as barcode and QR code values:

https://developer.apple.com/library/content/samplecode/AVCamBarcode/Introduction/Intro.html
