
GaussianBlur image with scaleAspectFill

I want to apply a Gaussian blur to an image, but I also want to keep my image view's content mode set to scaleAspectFill.

I am blurring my image with the following code:

func getImageWithBlur(image: UIImage) -> UIImage? {
    let context = CIContext(options: nil)

    guard let currentFilter = CIFilter(name: "CIGaussianBlur") else {
        return nil
    }
    let beginImage = CIImage(image: image)
    currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
    currentFilter.setValue(6.5, forKey: "inputRadius")
    guard let output = currentFilter.outputImage, let cgimg = context.createCGImage(output, from: output.extent) else {
        return nil
    }
    return UIImage(cgImage: cgimg)
}

But this is not working with scaleAspectFill mode.

[screenshot: the same image twice; the blurred version shows extra space at the top and bottom]

They are both the same image, but when I blur the second one, as you can see, it adds space at the top and bottom. What should I do so the blurred image fits properly as well?

When you apply a CIGaussianBlur filter, the resulting image is larger than the original, because the blur extends outward past the original edges.

To get back an image at the original size, you need to use the original image extent.

Note, though, that the blur is applied both inside and outside the edge, so if you clip only to the original extent, the edge will effectively "fade out". To avoid the blurred edges altogether, you'll need to clip farther in.
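
To see the size change directly, here is a minimal sketch (the printBlurExtents helper and the values in the comments are my own illustration, assuming a hypothetical 640x360 input and a radius of 6.5); in quick tests the padding is roughly three times the radius per side, but it's safer to read output.extent at runtime:

import UIKit
import CoreImage

func printBlurExtents(for image: UIImage) {
    guard let input = CIImage(image: image),
          let blur = CIFilter(name: "CIGaussianBlur") else { return }
    blur.setValue(input, forKey: kCIInputImageKey)
    blur.setValue(6.5, forKey: "inputRadius")
    print("input extent:  \(input.extent)")       // e.g. (0.0, 0.0, 640.0, 360.0)
    if let output = blur.outputImage {
        print("output extent: \(output.extent)")  // larger on all four sides, with a negative origin
    }
}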

Here is an example, using a UIImage extension to blur either with or without blurred edges:

extension UIImage {

    func blurredImageWithBlurredEdges(inputRadius: CGFloat) -> UIImage? {

        guard let currentFilter = CIFilter(name: "CIGaussianBlur") else {
            return nil
        }
        guard let beginImage = CIImage(image: self) else {
            return nil
        }
        currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
        currentFilter.setValue(inputRadius, forKey: "inputRadius")
        guard let output = currentFilter.outputImage else {
            return nil
        }

        // UIKit and UIImageView's .contentMode don't play well with
        // a CIImage alone, so we need to back the returned UIImage with a CGImage
        let context = CIContext()

        // cropping rect because blur changed size of image
        guard let final = context.createCGImage(output, from: beginImage.extent) else {
            return nil
        }

        return UIImage(cgImage: final)

    }

    func blurredImageWithClippedEdges(inputRadius: CGFloat) -> UIImage? {

        guard let currentFilter = CIFilter(name: "CIGaussianBlur") else {
            return nil
        }
        guard let beginImage = CIImage(image: self) else {
            return nil
        }
        currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
        currentFilter.setValue(inputRadius, forKey: "inputRadius")
        guard let output = currentFilter.outputImage else {
            return nil
        }

        // UIKit and UIImageView's .contentMode don't play well with
        // a CIImage alone, so we need to back the returned UIImage with a CGImage
        let context = CIContext()

        // cropping rect because blur changed size of image

        // to avoid the blurred edges, crop with a rect that insets the original
        // extent by half of the blur padding on each side
        // (output.extent.origin is negative, so -origin gives the padding)
        let newExtent = beginImage.extent.insetBy(dx: -output.extent.origin.x * 0.5, dy: -output.extent.origin.y * 0.5)
        guard let final = context.createCGImage(output, from: newExtent) else {
            return nil
        }
        return UIImage(cgImage: final)

    }

}

And here is an example view controller showing how to use them, and the different results:

class BlurTestViewController: UIViewController {

    let imgViewA = UIImageView()
    let imgViewB = UIImageView()
    let imgViewC = UIImageView()

    override func viewDidLoad() {
        super.viewDidLoad()

        let stackView = UIStackView()
        stackView.axis = .vertical
        stackView.alignment = .fill
        stackView.distribution = .fillEqually
        stackView.spacing = 8
        stackView.translatesAutoresizingMaskIntoConstraints = false

        view.addSubview(stackView)

        NSLayoutConstraint.activate([

            stackView.widthAnchor.constraint(equalToConstant: 200.0),
            stackView.centerXAnchor.constraint(equalTo: view.centerXAnchor),
            stackView.centerYAnchor.constraint(equalTo: view.centerYAnchor),

        ])

        [imgViewA, imgViewB, imgViewC].forEach { v in
            v.backgroundColor = .red
            v.contentMode = .scaleAspectFill
            v.clipsToBounds = true
            // square image views (1:1 ratio)
            v.heightAnchor.constraint(equalTo: v.widthAnchor, multiplier: 1.0).isActive = true
            stackView.addArrangedSubview(v)
        }

    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)

        guard let imgA = UIImage(named: "bkg640x360") else {
            fatalError("Could not load image!")
        }

        guard let imgB = imgA.blurredImageWithBlurredEdges(inputRadius: 6.5) else {
            fatalError("Could not create Blurred image with Blurred Edges")
        }

        guard let imgC = imgA.blurredImageWithClippedEdges(inputRadius: 6.5) else {
            fatalError("Could not create Blurred image with Clipped Edges")
        }

        imgViewA.image = imgA
        imgViewB.image = imgB
        imgViewC.image = imgC

    }

}

Using this original 640x360 image, with 200 x 200 image views:

[screenshot: the original 640x360 source image]

We get this output:

[screenshot: the three 200x200 image views: original, blurred with blurred edges, blurred with clipped edges]

Also worth mentioning (although I'm sure you've already noticed): these functions run very slowly on the simulator, but very quickly on an actual device.
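
If you want to quantify that difference, here is a quick timing sketch (assuming imgA from the view controller above; this is not a rigorous benchmark):

// rough timing check; compare the printed value on the simulator vs. a real device
let start = CFAbsoluteTimeGetCurrent()
_ = imgA.blurredImageWithClippedEdges(inputRadius: 6.5)
print("blur took \(CFAbsoluteTimeGetCurrent() - start) seconds")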

I believe your issue is that the convolution kernel of the CIFilter creates additional data as it applies the blur at the edges of the image. The CIContext isn't a strictly bounded space, and it can use the area around the image to fully process the output. So rather than using output.extent in createCGImage, use the size of the input image (converted to a CGRect).

To account for the blurred alpha channel along the image edges, you can use CIImage's unpremultiplyingAlpha().settingAlphaOne(in:) methods to flatten the image before returning it.

func getImageWithBlur(image: UIImage) -> UIImage? {

    let context = CIContext(options: nil)

    guard let currentFilter = CIFilter(name: "CIGaussianBlur") else { return nil }

    let beginImage = CIImage(image: image)
    currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
    currentFilter.setValue(6.5, forKey: "inputRadius")

    let rect = CGRect(x: 0.0, y: 0.0, width: image.size.width, height: image.size.height)

    guard let output = currentFilter.outputImage?.unpremultiplyingAlpha().settingAlphaOne(in: rect) else { return nil }
    guard let cgimg = context.createCGImage(output, from: rect) else { return nil }

    print("image.size:    \(image.size)")
    print("output.extent: \(output.extent)")

    return UIImage(cgImage: cgimg)

}
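
For completeness, here is a hypothetical usage sketch; photo and imageView are placeholders for your own image and image view:

// `photo` and `imageView` are placeholders, not from the original post
if let blurred = getImageWithBlur(image: photo) {
    imageView.contentMode = .scaleAspectFill
    imageView.clipsToBounds = true   // crop the aspect-fill overflow
    imageView.image = blurred
}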
