
I am using CIFilter to get a blurred image, but why is the output image always larger than the input image?

The code is as follows:

// Wrap the backing CGImage in a CIImage and run it through a CIGaussianBlur filter.
CIImage *imageToBlur = [CIImage imageWithCGImage: self.pBackgroundImageView.image.CGImage];
CIFilter *blurFilter = [CIFilter filterWithName: @"CIGaussianBlur"
                                  keysAndValues: kCIInputImageKey, imageToBlur,
                                                 @"inputRadius", [NSNumber numberWithFloat: 10.0], nil];
CIImage *outputImage = [blurFilter outputImage];
UIImage *resultImage = [UIImage imageWithCIImage: outputImage];

For example, the input image has a size of (640.000000, 1136.000000), but the output image has a size of (700.000000, 1196.000000).

Any advice is appreciated.

This is a super late answer to your question, but the main problem is that you're thinking of a CIImage as an image. It is not; it is a "recipe" for an image. So, when you apply the blur filter to it, Core Image calculates that to show every last pixel of your blur you would need a larger canvas. That estimated size needed to draw the entire image is called the "extent". In essence, every pixel is getting "fatter", which means that the final extent will be bigger than the original canvas. It is up to you to determine which part of the extent is useful to your drawing routine.
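A minimal sketch of one way to act on this (not part of the original answer): render only the original input's extent through a CIContext, so the blurred result comes back at the input size. It reuses the imageToBlur and outputImage variables from the question's code.

// Sketch: crop the render to the original input's extent
// (assumes imageToBlur and outputImage from the question are in scope).
CIContext *context = [CIContext contextWithOptions: nil];
CGRect originalExtent = [imageToBlur extent];   // size of the unfiltered input
CGImageRef cgResult = [context createCGImage: outputImage fromRect: originalExtent];
UIImage *resultImage = [UIImage imageWithCGImage: cgResult];
CGImageRelease(cgResult);

Rendering through a CIContext, rather than wrapping the CIImage with imageWithCIImage:, also forces the filter to actually execute, which tends to behave more predictably when the result is assigned to a UIImageView.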
