
resize image by UIImage or PHImageManager

What is the difference between the resizing results from the two methods below?

  1. Using UIImage:

     // Inside a UIImage extension: `size` and `draw(in:)` refer to self.
     let horizontalRatio = newSize.width / size.width
     let verticalRatio = newSize.height / size.height
     let ratio = max(horizontalRatio, verticalRatio)
     let scaledSize = CGSize(width: size.width * ratio, height: size.height * ratio)
     UIGraphicsBeginImageContextWithOptions(scaledSize, true, 0)
     draw(in: CGRect(origin: .zero, size: scaledSize))
     let newImage = UIGraphicsGetImageFromCurrentImageContext()
     UIGraphicsEndImageContext()
  2. Using PHImageManager (with resizeMode = .exact), as sketched below the list:

    requestImage(for:targetSize:contentMode:options:resultHandler:)
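
For concreteness, here is a minimal sketch of what that call might look like; the function name, asset, target size, and delivery mode below are placeholder choices, not part of the original question:

    import Photos
    import UIKit

    // Request a resized copy of a PHAsset; `asset` and `targetSize` are placeholders.
    func requestResizedImage(for asset: PHAsset,
                             targetSize: CGSize,
                             completion: @escaping (UIImage?) -> Void) {
        let options = PHImageRequestOptions()
        options.resizeMode = .exact               // ask Photos for exactly targetSize pixels
        options.deliveryMode = .highQualityFormat

        PHImageManager.default().requestImage(for: asset,
                                              targetSize: targetSize,
                                              contentMode: .aspectFill,
                                              options: options) { image, _ in
            completion(image)
        }
    }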

As far as I can tell, the second method is only suitable for images backed by the PHAsset class. Is there any other difference in terms of image quality, resizing efficiency, or memory usage?

This question is perhaps best answered by considering purpose.

UIImage and the UIGraphics calls you use it with are for taking a single image at a time (where you already have the full-size pixel data) and resizing it. There are other possibly useful ways to do something similar, like Core Image or Image I/O; which to use depends on where your pixel data is coming from, how much of it you have, and what else you might be doing with it.
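If you do go the Core Image route, a rough sketch looks like this. The filter name is the built-in CILanczosScaleTransform; the helper name and scale parameter are made up for illustration:

    import CoreImage
    import UIKit

    // Downscale a UIImage with Core Image's Lanczos filter; scale < 1 shrinks the image.
    func lanczosResized(_ input: UIImage, scale: CGFloat) -> UIImage? {
        guard let ciImage = CIImage(image: input),
              let filter = CIFilter(name: "CILanczosScaleTransform") else { return nil }
        filter.setValue(ciImage, forKey: kCIInputImageKey)
        filter.setValue(scale as NSNumber, forKey: kCIInputScaleKey)
        filter.setValue(1.0 as NSNumber, forKey: kCIInputAspectRatioKey)

        let context = CIContext()
        guard let output = filter.outputImage,
              let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }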

The Photos framework, and PHImageManager especially, is designed for apps that want to replicate major chunks of the Photos app's feature set: for example, building your own photo library browser so that your social networking app can integrate "post an image from the camera roll" into its own UI instead of pulling up a full-screen UIImagePickerController.

Apps that want to build photo browsers tend to want thumbnails. And they tend to want those thumbnails cached so they don't have to re-generate them (and re-download full size images from iCloud Photo Library) every time the user brings up the browser. What happened before the Photos framework came along was that every app that wanted its own photo browser generated its own thumbnail cache... so a user could have a 50 GB photo library, plus 1 GB of thumbnails in App A's sandbox, another 1 GB of thumbnails in App B's sandbox, and so on until they run out of local storage. The Photos framework and PHImageManager let things like storage management (including iCloud) and thumbnail generation be managed centrally by the system, so all apps can use the same thumbnail cache, and grab full-size assets only when needed.

Photos uses PHAsset because many of its tasks don't involve directly managing pixel data; they involve managing items in the user's Photos library. You use PHAsset and related API to decide which items you want, then use PHImageManager to get the pixel data for them. And you can request that pixel data at any size; if you only need thumbnails, you might get cached ones (faster, and without managing your own cache).
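A sketch of that flow, assuming photo-library authorization has already been granted; the fetch, thumbnail size, and variable names are arbitrary example values:

    import Photos
    import UIKit

    let manager = PHCachingImageManager()
    let thumbnailSize = CGSize(width: 150, height: 150)

    // Decide which items you want (here: every image asset in the library).
    let fetchResult = PHAsset.fetchAssets(with: .image, options: nil)
    var assets: [PHAsset] = []
    fetchResult.enumerateObjects { asset, _, _ in assets.append(asset) }

    // Ask the caching manager to prepare thumbnails for those assets ahead of time...
    manager.startCachingImages(for: assets,
                               targetSize: thumbnailSize,
                               contentMode: .aspectFill,
                               options: nil)

    // ...then request thumbnails (possibly already prepared) as needed.
    if let first = assets.first {
        manager.requestImage(for: first,
                             targetSize: thumbnailSize,
                             contentMode: .aspectFill,
                             options: nil) { image, _ in
            // use `image` for a thumbnail cell, etc.
        }
    }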

I can't claim to fully answer your question, but my guess would be to prefer the requestImage method.

In option 1, you need to load both the graphics context and the image to be drawn into memory. After some tests, I noticed that iOS does not seem to load the entire image when drawing only part of it, or when drawing it into a smaller context, so the drawback is not that big.

Option 2, on the other hand, is optimized for display. It may call your result handler block more than once (for example, a quick low-quality version first, then the final image), so the result is quite different.
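To see what that means in practice, here is a hedged sketch of handling both callbacks; the function and parameter names are mine, while PHImageResultIsDegradedKey is the actual key Photos puts in the info dictionary:

    import Photos
    import UIKit

    // With .opportunistic delivery, the handler may run twice: first with a quick
    // low-quality image, then again with the final one.
    func loadImage(for asset: PHAsset, targetSize: CGSize,
                   update: @escaping (_ image: UIImage, _ isFinal: Bool) -> Void) {
        let options = PHImageRequestOptions()
        options.deliveryMode = .opportunistic
        PHImageManager.default().requestImage(for: asset,
                                              targetSize: targetSize,
                                              contentMode: .aspectFit,
                                              options: options) { image, info in
            guard let image = image else { return }
            let isDegraded = (info?[PHImageResultIsDegradedKey] as? Bool) ?? false
            update(image, !isDegraded)
        }
    }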

If you simply need to resize your image, I suggest a third solution: Image I/O. CGImageSourceCreateWithData, combined with CGImageSourceCreateThumbnailAtIndex, is optimized to do just what you want.
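A sketch of that route, assuming you already have the encoded image bytes in a Data value; the function name and maxPixelSize parameter are example choices:

    import ImageIO
    import UIKit

    // Decode a downsampled image directly from encoded data, without materializing
    // the full-size bitmap first.
    func downsampledImage(from data: Data, maxPixelSize: Int) -> UIImage? {
        guard let source = CGImageSourceCreateWithData(data as CFData, nil) else { return nil }
        let options: [CFString: Any] = [
            kCGImageSourceCreateThumbnailFromImageAlways: true,
            kCGImageSourceThumbnailMaxPixelSize: maxPixelSize,
            kCGImageSourceCreateThumbnailWithTransform: true   // honor EXIF orientation
        ]
        guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary) else {
            return nil
        }
        return UIImage(cgImage: cgImage)
    }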

I recommend Nick Lockwood's talk on the subject.


 