
ios OpenCV increases the image size

Below is the code for converting a UIImage to cv::Mat:

+ (cv::Mat)cvMatFromUIImage:(UIImage *)image {

    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols, rows;

    if (image.imageOrientation == UIImageOrientationLeft || image.imageOrientation == UIImageOrientationRight) {
        cols = image.size.height;
        rows = image.size.width;
    } else {
        cols = image.size.width;
        rows = image.size.height;
    }

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels    
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,             // Pointer to backing data
                                                cols,                       // Width of bitmap
                                                rows,                       // Height of bitmap
                                                8,                          // Bits per component
                                                cvMat.step[0],              // Bytes per row
                                                colorSpace,                 // Colorspace
                                                kCGImageAlphaNoneSkipLast |
                                                kCGBitmapByteOrderDefault);

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);

    // Portrait captures were drawn rotated; transpose and flip to restore
    // the upright orientation before returning
    if (image.imageOrientation == UIImageOrientationLeft || image.imageOrientation == UIImageOrientationRight) {
        cv::Mat cvMatRotated;
        cv::transpose(cvMat, cvMatRotated);
        cvMat.release();
        cv::flip(cvMatRotated, cvMatRotated, 1);
        return cvMatRotated;
    }
    return cvMat;
}

And this is the code for converting a cv::Mat back to a UIImage:

+ (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat {

    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];    
    CGColorSpaceRef colorSpace;

    CGBitmapInfo bitmapInfo;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
        bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
        // For 4-channel data, describe the unused 4th byte as skipped alpha;
        // kCGImageAlphaNone only matches data with no padding byte
        bitmapInfo = kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault;
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    CGImageRef imageRef = CGImageCreate(cvMat.cols,                 // Width
                                        cvMat.rows,                 // Height
                                        8,                          // Bits per component
                                        8 * cvMat.elemSize(),       // Bits per pixel
                                        cvMat.step[0],              // Bytes per row
                                        colorSpace,                 // Colorspace
                                        bitmapInfo,                 // Bitmap info flags
                                        provider,                   // CGDataProviderRef
                                        NULL,                       // Decode
                                        false,                      // Should interpolate
                                        kCGRenderingIntentDefault); // Intent

    UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);    
    return image;
}

I convert a 1080 × 1920 (1.5 MB) image to cv::Mat; after some preprocessing I convert it back to UIImage, which gives me an image of size 2517 × 1527 (6 MB).

I don't want the image size to increase after image processing. Please guide me on where I am going wrong.

// Crop action

cv::Mat undistorted = cv::Mat(cvSize(maxWidth, maxHeight), CV_8UC4);
cv::Mat original = [MMOpenCVHelper cvMatFromUIImage:_adjustedImage];

cv::warpPerspective(original, undistorted, cv::getPerspectiveTransform(src, dst), cvSize(maxWidth, maxHeight));
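
For reference, here is a hedged sketch of how src, dst, maxWidth and maxHeight are typically derived for a perspective crop like this (the corner names and values below are assumptions, not taken from the question; assumes <cmath> and <algorithm> are available):

cv::Point2f topLeft(100, 50), topRight(980, 80);            // assumed quad corners
cv::Point2f bottomLeft(80, 1830), bottomRight(1000, 1850);

// The output dimensions follow the longest edges of the quad
float widthTop    = std::hypot(topRight.x - topLeft.x,       topRight.y - topLeft.y);
float widthBottom = std::hypot(bottomRight.x - bottomLeft.x, bottomRight.y - bottomLeft.y);
float heightLeft  = std::hypot(bottomLeft.x - topLeft.x,     bottomLeft.y - topLeft.y);
float heightRight = std::hypot(bottomRight.x - topRight.x,   bottomRight.y - topRight.y);
int maxWidth  = (int)std::max(widthTop, widthBottom);
int maxHeight = (int)std::max(heightLeft, heightRight);

// Map the quad onto an upright maxWidth x maxHeight rectangle
cv::Point2f src[4] = { topLeft, topRight, bottomRight, bottomLeft };
cv::Point2f dst[4] = {
    cv::Point2f(0, 0),
    cv::Point2f(maxWidth - 1, 0),
    cv::Point2f(maxWidth - 1, maxHeight - 1),
    cv::Point2f(0, maxHeight - 1)
};

Note that warpPerspective produces an output of exactly cvSize(maxWidth, maxHeight), regardless of the input size.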

Your code is not resizing the image. It must be an unintended side effect of the preprocessing, which you don't detail here.

UIImage* image = self.testImage;
NSLog(@"original UIImage: %.0f %.0f", image.size.width, image.size.height);

cv::Mat matImage = [self cvMatFromUIImage:image];
NSLog(@"cv matImage: %d %d", matImage.cols, matImage.rows);

UIImage* newImage = [self UIImageFromCVMat:matImage];
NSLog(@"new UIImage: %.0f %.0f", newImage.size.width, newImage.size.height);

Output:

original UIImage: 720 960
cv matImage: 720 960
new UIImage: 720 960

edit

Following your expanded question, it looks as if the output of warpPerspective is sized to cvSize(maxWidth, maxHeight), so that is the size you will get. If you want to end up with the input size, you can resize() before converting back to UIImage, or simply set the output mat to the same size as the input.
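
A minimal sketch of both options, reusing the names from the snippets above (illustrative only, not the poster's exact code):

// Option 1: warp at the computed size, then resize back to the input size
cv::Mat warped, resized;
cv::warpPerspective(original, warped,
                    cv::getPerspectiveTransform(src, dst),
                    cvSize(maxWidth, maxHeight));
cv::resize(warped, resized, original.size(), 0, 0, cv::INTER_AREA);

// Option 2: warp straight into an output mat of the input size
// (for the content to fill the frame, the dst corner points must then
// span original.size() rather than maxWidth/maxHeight)
cv::Mat sameSize;
cv::warpPerspective(original, sameSize,
                    cv::getPerspectiveTransform(src, dst),
                    original.size());

UIImage *result = [MMOpenCVHelper UIImageFromCVMat:resized];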
