
Setting UIImageView content mode after applying a CIFilter

Thanks for looking.

Here's my code:

CIImage *result = _vignette.outputImage;
self.mainImageView.image = nil;
//self.mainImageView.contentMode = UIViewContentModeScaleAspectFit;
self.mainImageView.image = [UIImage imageWithCIImage:result];
self.mainImageView.contentMode = UIViewContentModeScaleAspectFit;

Here, _vignette is a correctly configured filter, and the image effect is applied correctly.

I'm using a source image with a resolution of 500x375. My imageView has almost the iPhone screen's resolution, so to avoid stretching I'm using AspectFit.

But after applying the effect, when I assign the result image back to my imageView, it stretches. No matter which UIViewContentMode I use, it doesn't work; it seems ScaleToFill is always applied, regardless of the mode I've set.

Any idea why this is happening? Any suggestion is highly appreciated.

(1) Aspect Fit does stretch the image - to fit. If you don't want the image stretched at all, use Center (for example).

(2) imageWithCIImage gives you a very weird beast: a UIImage not based on a CGImage, and so not susceptible to the normal rules of layer display. It is really nothing but a thin wrapper around a CIImage, which is not what you want. You must convert (render) the CIFilter output through a CGImage into a UIImage, thus giving you a UIImage that actually has some bits (a CGImage, a bitmap). My discussion here gives you code that demonstrates:

http://www.apeth.com/iOSBook/ch15.html#_cifilter_and_ciimage

In other words, at some point you must call CIContext's createCGImage:fromRect: to generate a CGImageRef from the output of your CIFilter, and pass that on into a UIImage. Until you do that, you don't have the output of your filter operations as a real UIImage.
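Applied to the code in the question, that rendering step might look something like this (a minimal sketch, reusing _vignette and mainImageView from the question):

CIImage *result = _vignette.outputImage;

// Render the CIImage recipe into a real bitmap (a CGImage)
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:result fromRect:[result extent]];

// A UIImage backed by that bitmap obeys contentMode normally
self.mainImageView.image = [UIImage imageWithCGImage:cgImage];
self.mainImageView.contentMode = UIViewContentModeScaleAspectFit;

// createCGImage:fromRect: follows the Create rule, so release the CGImage
CGImageRelease(cgImage);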

Alternatively, you can draw the image from imageWithCIImage into a graphics context. For example, you can draw it into an image graphics context and then use that image.
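A sketch of that second approach (assuming result is the filter's output CIImage, as above):

UIImage *wrapper = [UIImage imageWithCIImage:result];

UIGraphicsBeginImageContextWithOptions(wrapper.size, NO, wrapper.scale);
// Drawing forces the wrapped CIImage to be rendered into the bitmap context
[wrapper drawInRect:CGRectMake(0, 0, wrapper.size.width, wrapper.size.height)];
UIImage *bitmapImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();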

What you can't do is display the image from imageWithCIImage directly. That's because it isn't an image! It has no underlying bitmap (CGImage). There's no there there. All it is is a set of CIFilter instructions for deriving the image.

I just spent all day on this. I was getting orientation problems followed by poor-quality output.

After snapping an image using the camera, I do this:

NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];

This causes a rotation problem, so I needed to do this to it:

UIImage *scaleAndRotateImage(UIImage *image)
{
    int kMaxResolution = image.size.height; // Or whatever

    CGImageRef imgRef = image.CGImage;

    CGFloat width = CGImageGetWidth(imgRef);
    CGFloat height = CGImageGetHeight(imgRef);

    CGAffineTransform transform = CGAffineTransformIdentity;
    CGRect bounds = CGRectMake(0, 0, width, height);
    if (width > kMaxResolution || height > kMaxResolution) {
        CGFloat ratio = width/height;
        if (ratio > 1) {
            bounds.size.width = kMaxResolution;
            bounds.size.height = bounds.size.width / ratio;
        }
        else {
            bounds.size.height = kMaxResolution;
            bounds.size.width = bounds.size.height * ratio;
        }
    }

    CGFloat scaleRatio = bounds.size.width / width;
    CGSize imageSize = CGSizeMake(CGImageGetWidth(imgRef), CGImageGetHeight(imgRef));
    CGFloat boundHeight;
    UIImageOrientation orient = image.imageOrientation;
    switch(orient) {

        case UIImageOrientationUp: //EXIF = 1
            transform = CGAffineTransformIdentity;
            break;

        case UIImageOrientationUpMirrored: //EXIF = 2
            transform = CGAffineTransformMakeTranslation(imageSize.width, 0.0);
            transform = CGAffineTransformScale(transform, -1.0, 1.0);
            break;

        case UIImageOrientationDown: //EXIF = 3
            transform = CGAffineTransformMakeTranslation(imageSize.width, imageSize.height);
            transform = CGAffineTransformRotate(transform, M_PI);
            break;

        case UIImageOrientationDownMirrored: //EXIF = 4
            transform = CGAffineTransformMakeTranslation(0.0, imageSize.height);
            transform = CGAffineTransformScale(transform, 1.0, -1.0);
            break;

        case UIImageOrientationLeftMirrored: //EXIF = 5
            boundHeight = bounds.size.height;
            bounds.size.height = bounds.size.width;
            bounds.size.width = boundHeight;
            transform = CGAffineTransformMakeTranslation(imageSize.height, imageSize.width);
            transform = CGAffineTransformScale(transform, -1.0, 1.0);
            transform = CGAffineTransformRotate(transform, 3.0 * M_PI / 2.0);
            break;

        case UIImageOrientationLeft: //EXIF = 6
            boundHeight = bounds.size.height;
            bounds.size.height = bounds.size.width;
            bounds.size.width = boundHeight;
            transform = CGAffineTransformMakeTranslation(0.0, imageSize.width);
            transform = CGAffineTransformRotate(transform, 3.0 * M_PI / 2.0);
            break;

        case UIImageOrientationRightMirrored: //EXIF = 7
            boundHeight = bounds.size.height;
            bounds.size.height = bounds.size.width;
            bounds.size.width = boundHeight;
            transform = CGAffineTransformMakeScale(-1.0, 1.0);
            transform = CGAffineTransformRotate(transform, M_PI / 2.0);
            break;

        case UIImageOrientationRight: //EXIF = 8
            boundHeight = bounds.size.height;
            bounds.size.height = bounds.size.width;
            bounds.size.width = boundHeight;
            transform = CGAffineTransformMakeTranslation(imageSize.height, 0.0);
            transform = CGAffineTransformRotate(transform, M_PI / 2.0);
            break;

        default:
            [NSException raise:NSInternalInconsistencyException format:@"Invalid image orientation"];

    }

    UIGraphicsBeginImageContext(bounds.size);

    CGContextRef context = UIGraphicsGetCurrentContext();

    if (orient == UIImageOrientationRight || orient == UIImageOrientationLeft) {
        CGContextScaleCTM(context, -scaleRatio, scaleRatio);
        CGContextTranslateCTM(context, -height, 0);
    }
    else {
        CGContextScaleCTM(context, scaleRatio, -scaleRatio);
        CGContextTranslateCTM(context, 0, -height);
    }

    CGContextConcatCTM(context, transform);

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imgRef);
    UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();  

    return imageCopy;  
}

Then when I apply my filter, I do this (with special thanks to this thread - specifically matt and Wex):

-(UIImage *)processImage:(UIImage *)image {

    CIImage *inImage = [CIImage imageWithCGImage:image.CGImage];

    CIFilter *filter = [CIFilter filterWithName:@"CIColorControls" keysAndValues:
                        kCIInputImageKey, inImage,
                        @"inputContrast", [NSNumber numberWithFloat:1.0],
                        nil];

    // outputImage is a CIImage (a recipe), not a UIImage
    CIImage *outImage = [filter outputImage];

    // Juicy bit: render the recipe into a real bitmap
    CGImageRef cgimageref = [[CIContext contextWithOptions:nil] createCGImage:outImage fromRect:[outImage extent]];
    UIImage *result = [UIImage imageWithCGImage:cgimageref];
    CGImageRelease(cgimageref); // createCGImage:fromRect: follows the Create rule

    return result;
}
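Putting the two pieces together, the capture-then-filter flow might look like this (illustrative only, using the names from the snippets above):

UIImage *image = [[UIImage alloc] initWithData:imageData];
UIImage *upright = scaleAndRotateImage(image);    // fix the EXIF orientation first
UIImage *filtered = [self processImage:upright];  // then apply the filter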

This is an answer for Swift, inspired by this question and the solution by user1951992.

let imageView = UIImageView(frame: CGRect(x: 100, y: 200, width: 100, height: 50))
imageView.contentMode = .scaleAspectFit

//just an example for a CIFilter
//NOTE: we're using implicit unwrapping here because we're sure there is a filter
//(and an image) named exactly this
let filter = CIFilter(name: "CISepiaTone")!
let image = UIImage(named: "image")!
filter.setValue(CIImage(image: image), forKey: kCIInputImageKey)
filter.setValue(0.5, forKey: kCIInputIntensityKey)

// (this guard assumes the snippet lives inside a function, so `return` is legal)
guard let outputCIImage = filter.outputImage,
    let outputCGImage = CIContext().createCGImage(outputCIImage, from: outputCIImage.extent) else {
        return
}

imageView.image = UIImage(cgImage: outputCGImage)
