iOS OpenCV increases the image size

Below is the code for converting a UIImage to cv::Mat:

+ (cv::Mat)cvMatFromUIImage:(UIImage *)image {

    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols, rows;

    if (image.imageOrientation == UIImageOrientationLeft || image.imageOrientation == UIImageOrientationRight) {
        cols = image.size.height;
        rows = image.size.width;
    } else {
        cols = image.size.width;
        rows = image.size.height;
    }

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels    
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,             // Pointer to backing data
                                                cols,                       // Width of bitmap
                                                rows,                       // Height of bitmap
                                                8,                          // Bits per component
                                                cvMat.step[0],              // Bytes per row
                                                colorSpace,                 // Colorspace
                                                kCGImageAlphaNoneSkipLast |
                                                kCGBitmapByteOrderDefault);

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);

    // Rotated orientations (Left/Right) need a transpose plus a horizontal
    // flip to bring the pixel data upright; other orientations return as-is.
    if (image.imageOrientation == UIImageOrientationLeft || image.imageOrientation == UIImageOrientationRight) {
        cv::Mat rotated;
        cv::transpose(cvMat, rotated);
        cv::flip(rotated, rotated, 1);
        cvMat.release();
        return rotated;
    }
    return cvMat;
}

And this is the code for converting a cv::Mat back to a UIImage:

+ (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat {

    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];    
    CGColorSpaceRef colorSpace;

    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    CGImageRef imageRef = CGImageCreate(cvMat.cols,                                     // Width
                                    cvMat.rows,                                     // Height
                                    8,                                              // Bits per component
                                    8 * cvMat.elemSize(),                           // Bits per pixel
                                    cvMat.step[0],                                  // Bytes per row
                                    colorSpace,                                     // Colorspace
                                    kCGImageAlphaNone | kCGBitmapByteOrderDefault,  // Bitmap info flags
                                    provider,                                       // CGDataProviderRef
                                    NULL,                                           // Decode
                                    false,                                          // Should interpolate
                                    kCGRenderingIntentDefault);                     // Intent

    UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);    
    return image;
}
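
One aside, not from the original post: kCGImageAlphaNone strictly describes pixels that carry no alpha byte, while the CV_8UC4 mats produced by cvMatFromUIImage carry an ignored fourth byte. If you want the flags to describe the layout exactly, a sketch of matching the bitmap info to the mat's channel count:

// Sketch (an assumption, not the original answer's code): pick the bitmap
// info per channel count. Grayscale mats have no alpha byte; 4-channel mats
// from cvMatFromUIImage carry an ignored fourth byte, which
// kCGImageAlphaNoneSkipLast describes.
CGBitmapInfo bitmapInfo = (cvMat.elemSize() == 1)
    ? (CGBitmapInfo)(kCGImageAlphaNone | kCGBitmapByteOrderDefault)
    : (CGBitmapInfo)(kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault);

You would pass bitmapInfo in place of the hard-coded flags in the CGImageCreate call above.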

I convert a 1080×1920 image (1.5 MB) to cv::Mat; after some preprocessing I convert it back to a UIImage, which gives me an image of size 2517×1527 (6 MB).

I don't want the image size to increase after processing. Please guide me as to where I am going wrong.

// Crop action
cv::Mat original = [MMOpenCVHelper cvMatFromUIImage:_adjustedImage];
cv::Mat undistorted = cv::Mat(cvSize(maxWidth, maxHeight), CV_8UC4);

cv::warpPerspective(original, undistorted, cv::getPerspectiveTransform(src, dst), cvSize(maxWidth, maxHeight));
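
(The post doesn't show how maxWidth and maxHeight are computed. For reference, a common sketch for this kind of perspective crop, assuming src[0..3] holds the four detected corners in top-left, top-right, bottom-right, bottom-left order, is to take the longer of each pair of opposite edges:)

// Sketch (not from the original post): derive the warp output size from the
// detected quad so the crop keeps the document's own pixel dimensions.
// Assumes src[0..3] are the corners in TL, TR, BR, BL order.
// (Requires <cmath> and <algorithm> for std::hypot / std::max.)
double widthTop    = std::hypot(src[1].x - src[0].x, src[1].y - src[0].y);
double widthBottom = std::hypot(src[2].x - src[3].x, src[2].y - src[3].y);
double heightLeft  = std::hypot(src[3].x - src[0].x, src[3].y - src[0].y);
double heightRight = std::hypot(src[2].x - src[1].x, src[2].y - src[1].y);
int maxWidth  = (int)std::max(widthTop, widthBottom);
int maxHeight = (int)std::max(heightLeft, heightRight);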

Your code is not resizing the image; it must be an unintended side effect of the preprocessing, which you don't detail here. A quick round-trip test shows the conversions preserve the size:

 UIImage* image = self.testImage;
 NSLog(@"original UIImage: %.0f %.0f", image.size.width,image.size.height);

 cv::Mat matImage = [self cvMatFromUIImage:image];
 NSLog(@"cv matImage: %d %d", matImage.cols,matImage.rows);

 UIImage* newImage = [self UIImageFromCVMat:matImage];
 NSLog(@"new UIImage: %.0f %.0f", newImage.size.width,newImage.size.height);
 original UIImage: 720 960
 cv matImage: 720 960
 new UIImage: 720 960

Edit
Following your expanded question, it looks as if the output of warpPerspective is sized to cvSize(maxWidth, maxHeight), so that is the size you will get. If you want to end up with the input size, you can resize() before converting back to UIImage, or simply set the output mat to the same size as the input.
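
A minimal sketch of the first option, assuming the original and undistorted mats from the crop action above:

// Sketch: scale the warped result back to the input's dimensions before
// converting, so the round trip ends at the original size.
cv::Mat restored;
cv::resize(undistorted, restored, original.size(), 0, 0, cv::INTER_AREA);
UIImage *result = [MMOpenCVHelper UIImageFromCVMat:restored];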
