
Cropping UIImage not yielding expected crop - swift?

I am trying to crop an image, but the crop does not yield the expected portion of the UIImage. The cropped images come out with the wrong orientation and appear mirrored. It is very confusing.

    @IBOutlet weak var imgPreViewOutlet: UIImageView!

    guard let cgImage = self.imgPreViewOutlet.image?.cgImage else { return }

    // NORMALISE COORDINATES
    let topXn = TOP_LEFT.x / screenWidth
    let topYn = TOP_LEFT.y / screenHeight
    let widthn = (TOP_RIGHT.x - TOP_LEFT.x) / screenWidth
    let heightn = (BOTTOM_RIGHT.y - TOP_RIGHT.y) / screenHeight

    // DIMENSIONS OF THE CGIMAGE
    let cgImgWidth = cgImage.width
    let cgImgHeight = cgImage.height

    let cropRect = CGRect(x: topXn * CGFloat(widthn),
                          y: topYn * CGFloat(heightn),
                          width: widthn * CGFloat(cgImgWidth),
                          height: heightn * CGFloat(cgImgHeight))

    if let cgCropImged = cgImage.cropping(to: cropRect) {
        print("cropRect: \(cropRect)")
        self.imgPreViewOutlet.image = UIImage(cgImage: cgCropImged)
    }

(image: original image)

CROPPED:

(image: cropped result)

Core Graphics coordinates have their origin at the bottom left; UIKit coordinates have theirs at the top left. I think you're confusing the two.

This might help:

How to compensate the flipped coordinate system of core graphics for easy drawing?
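To illustrate the coordinate issue: `CGImage.cropping(to:)` expects a rect in the image's own pixel space, so each normalized component should be scaled by the matching image dimension. This is only a sketch under the question's setup (normalized `topXn`/`topYn`/`widthn`/`heightn` values between 0 and 1); the helper name is hypothetical:

```swift
import CoreGraphics

// Hypothetical helper: map a normalized (0...1) selection rect into
// the pixel coordinate space of a CGImage. Each normalized component
// is scaled by the corresponding image dimension.
func pixelCropRect(topXn: CGFloat, topYn: CGFloat,
                   widthn: CGFloat, heightn: CGFloat,
                   in cgImage: CGImage) -> CGRect {
    let w = CGFloat(cgImage.width)
    let h = CGFloat(cgImage.height)
    return CGRect(x: topXn * w,        // x scaled by image width
                  y: topYn * h,        // y scaled by image height
                  width: widthn * w,
                  height: heightn * h)
}
```

Note that the question's `cropRect` multiplies `topXn` by `widthn` (and `topYn` by `heightn`) rather than by the image dimensions, which collapses the origin toward zero on top of any orientation mismatch.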

As requested, here's an example of using CIPerspectiveCorrection to crop your image. I downloaded your image and used Sketch to get the approximate values of the four CGPoints in it.

    let inputBottomLeft = CIVector(x: 38, y: 122)
    let inputTopLeft = CIVector(x: 68, y: 236)
    let inputTopRight = CIVector(x: 146, y: 231)
    let inputBottomRight = CIVector(x: 151, y: 96)

    let filter = CIFilter(name: "CIPerspectiveCorrection")
    filter?.setValue(inputTopLeft, forKey: "inputTopLeft")
    filter?.setValue(inputTopRight, forKey: "inputTopRight")
    filter?.setValue(inputBottomLeft, forKey: "inputBottomLeft")
    filter?.setValue(inputBottomRight, forKey: "inputBottomRight")

    // `ciOriginal` is the CIImage created from the source image,
    // e.g. let ciOriginal = CIImage(image: originalImage)
    filter?.setValue(ciOriginal, forKey: "inputImage")
    let ciOutput = filter?.outputImage

Please note a few things:

  • The most important thing to never forget about Core Image is that the origin of a CIImage is at the bottom left, not the top left. You need to "flip" the Y axis of each point.
  • A UIImage has a size, while a CIImage has an extent. These are effectively the same thing. (The only time they differ is when a CIFilter "creates" something, such as a color or a tiled image, in which case the extent is infinite.)
  • A CIVector can have quite a few signatures. In this case I'm using the X/Y one; it's a straight copy of a CGPoint except that the Y axis is flipped.
  • I have a sample project that uses a UIImageView. Be aware that performance on the simulator is nowhere near performance on a real device; I recommend using a device any time Core Image is involved. Also, I did a straight conversion from the output CIImage to a UIImage. It's usually better to use a CIContext and Core Graphics if you are looking for performance.
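The CIContext route mentioned above can be sketched roughly as follows; this is an illustrative sketch, not the sample project's exact code:

```swift
import CoreImage
import UIKit

// Creating a CIContext is expensive, so make one and reuse it
// rather than creating a new context per frame or per render.
let ciContext = CIContext()

// Render a filter's output CIImage into a UIImage by going through
// Core Graphics. createCGImage renders the given extent to a bitmap.
func renderToUIImage(_ ciImage: CIImage) -> UIImage? {
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```

Rendering through a shared CIContext avoids the implicit context that `UIImage(ciImage:)` would otherwise create on every display.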

Given your input, here's the output:

(image: corrected output)
