
I am using CIFilter to get a blurred image, but why is the output image always larger than the input image?

The code is as follows:

CIImage *imageToBlur = [CIImage imageWithCGImage: self.pBackgroundImageView.image.CGImage];
CIFilter *blurFilter = [CIFilter filterWithName: @"CIGaussianBlur" keysAndValues: kCIInputImageKey, imageToBlur, @"inputRadius", [NSNumber numberWithFloat: 10.0], nil];
CIImage *outputImage = [blurFilter outputImage];
UIImage *resultImage = [UIImage imageWithCIImage: outputImage];

For example, the input image has a size of (640.000000, 1136.000000), but the output image has a size of (700.000000, 1196.000000).

Any advice is appreciated.

This is a super late answer to your question, but the main problem is that you're thinking of a CIImage as an image. It is not; it is a "recipe" for an image. So when you apply the blur filter to it, Core Image calculates that to show every last pixel of the blur it would need a larger canvas. That estimated size needed to draw the entire result is called the "extent". In essence, every pixel is getting "fatter", which means the final extent will be bigger than the original canvas. It is up to you to decide which part of the extent is useful to your drawing routine.
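
As a minimal sketch (assuming self.pBackgroundImageView.image holds a valid UIImage, as in your code), you can crop the blurred output back to the input's extent and render it through a CIContext so the result keeps the original size:

#import <CoreImage/CoreImage.h>
#import <UIKit/UIKit.h>

CIImage *imageToBlur = [CIImage imageWithCGImage:self.pBackgroundImageView.image.CGImage];

CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setValue:imageToBlur forKey:kCIInputImageKey];
[blurFilter setValue:@10.0 forKey:kCIInputRadiusKey];

// The blur enlarges the output extent; restrict it to the original image's extent.
CIImage *outputImage = [[blurFilter outputImage] imageByCroppingToRect:imageToBlur.extent];

// Render with a CIContext so the resulting CGImage has the cropped (original) size.
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:outputImage fromRect:imageToBlur.extent];
UIImage *resultImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);

Note that cropping simply discards the blurred pixels that spill past the original bounds; if you want the blur to fade out at the edges instead, you would need to handle the edges differently (for example by clamping the input before blurring).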

