
Scaling Images: how can the accelerate be the slowest method?

I am testing several methods to rescale a UIImage.

I have tested all the methods posted here and measured the time each takes to resize an image.

1) UIGraphicsBeginImageContextWithOptions & UIImage -drawInRect:

let image = UIImage(contentsOfFile: self.URL.path!)! // contentsOfFile is failable; force-unwrapped for brevity

let size = CGSizeApplyAffineTransform(image.size, CGAffineTransformMakeScale(0.5, 0.5))
let hasAlpha = false
let scale: CGFloat = 0.0 // Automatically use scale factor of main screen

UIGraphicsBeginImageContextWithOptions(size, !hasAlpha, scale)
image.drawInRect(CGRect(origin: CGPointZero, size: size))

let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

2) CGBitmapContextCreate & CGContextDrawImage

let cgImage = UIImage(contentsOfFile: self.URL.path!)!.CGImage

let width = CGImageGetWidth(cgImage) / 2
let height = CGImageGetHeight(cgImage) / 2
let bitsPerComponent = CGImageGetBitsPerComponent(cgImage)
let bytesPerRow = CGImageGetBytesPerRow(cgImage)
let colorSpace = CGImageGetColorSpace(cgImage)
let bitmapInfo = CGImageGetBitmapInfo(cgImage)

let context = CGBitmapContextCreate(nil, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo.rawValue)

CGContextSetInterpolationQuality(context, kCGInterpolationHigh)

CGContextDrawImage(context, CGRect(origin: CGPointZero, size: CGSize(width: CGFloat(width), height: CGFloat(height))), cgImage)

let scaledImage = CGBitmapContextCreateImage(context).flatMap { UIImage(CGImage: $0) }

3) CGImageSourceCreateThumbnailAtIndex

import ImageIO

if let imageSource = CGImageSourceCreateWithURL(self.URL, nil) {
    let options: [NSString: NSObject] = [
        kCGImageSourceThumbnailMaxPixelSize: max(size.width, size.height) / 2.0,
        kCGImageSourceCreateThumbnailFromImageAlways: true
    ]

    let scaledImage = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, options).flatMap { UIImage(CGImage: $0) }
}

4) Lanczos Resampling with Core Image

let image = CIImage(contentsOfURL: self.URL)

let filter = CIFilter(name: "CILanczosScaleTransform")!
filter.setValue(image, forKey: "inputImage")
filter.setValue(0.5, forKey: "inputScale")
filter.setValue(1.0, forKey: "inputAspectRatio")
let outputImage = filter.valueForKey("outputImage") as! CIImage

let context = CIContext(options: [kCIContextUseSoftwareRenderer: false])
let scaledImage = UIImage(CGImage: context.createCGImage(outputImage, fromRect: outputImage.extent))

5) vImage in Accelerate

let image = UIImage(contentsOfFile: self.URL.path!)! // kept in scope; its size and orientation are used below
let cgImage = image.CGImage

// create a source buffer
var format = vImage_CGImageFormat(bitsPerComponent: 8, bitsPerPixel: 32, colorSpace: nil, 
    bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.First.rawValue), 
    version: 0, decode: nil, renderingIntent: CGColorRenderingIntent.RenderingIntentDefault)
var sourceBuffer = vImage_Buffer()
defer {
    // free height * rowBytes bytes (the original had height * height * 4, a typo)
    sourceBuffer.data.dealloc(Int(sourceBuffer.height) * sourceBuffer.rowBytes)
}

var error = vImageBuffer_InitWithCGImage(&sourceBuffer, &format, nil, cgImage, numericCast(kvImageNoFlags))
guard error == kvImageNoError else { return nil }

// create a destination buffer
let scale = UIScreen.mainScreen().scale
let destWidth = Int(image.size.width * 0.5 * scale)
let destHeight = Int(image.size.height * 0.5 * scale)
let bytesPerPixel = CGImageGetBitsPerPixel(image.CGImage) / 8
let destBytesPerRow = destWidth * bytesPerPixel
let destData = UnsafeMutablePointer<UInt8>.alloc(destHeight * destBytesPerRow)
defer {
    destData.dealloc(destHeight * destBytesPerRow)
}
var destBuffer = vImage_Buffer(data: destData, height: vImagePixelCount(destHeight), width: vImagePixelCount(destWidth), rowBytes: destBytesPerRow)

// scale the image
error = vImageScale_ARGB8888(&sourceBuffer, &destBuffer, nil, numericCast(kvImageHighQualityResampling))
guard error == kvImageNoError else { return nil }

// create a CGImage from vImage_Buffer
let destCGImage = vImageCreateCGImageFromBuffer(&destBuffer, &format, nil, nil, numericCast(kvImageNoFlags), &error)?.takeRetainedValue()
guard error == kvImageNoError else { return nil }

// create a UIImage
let scaledImage = destCGImage.flatMap { UIImage(CGImage: $0, scale: scale, orientation: image.imageOrientation) } // use the screen scale computed above; 0.0 is not a valid scale for this initializer

After testing this for hours and measuring the time every method took to rescale the images to 100x100, my conclusions are completely different from NSHipster's. First of all, vImage in Accelerate is 200 times slower than the first method, which in my opinion is the poor cousin of the others. The Core Image method is also slow. But I am intrigued by how method #1 can smash methods 3, 4 and 5, some of which in theory process the work on the GPU.

Method #3, for example, took 2 seconds to resize a 1024x1024 image to 100x100. On the other hand, #1 took 0.01 seconds!

Am I missing something?

Something must be wrong, or Apple would not have taken the time to write the Accelerate and CIImage stuff.

NOTE: I am measuring from the moment the image is already loaded into a variable to the moment a scaled version is saved to another variable. I am not considering the time it takes to read from the file.
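Concretely, the timed region can be sketched like this (a sketch only; `scaleImage` is a hypothetical wrapper around any of the five methods above, and only the scaling call sits between the timestamps):

```swift
import UIKit

// Load outside the timed region, per the note above.
let image = UIImage(contentsOfFile: self.URL.path!)!

let start = CFAbsoluteTimeGetCurrent()
let scaledImage = scaleImage(image) // hypothetical wrapper for one of the five methods
let elapsed = CFAbsoluteTimeGetCurrent() - start

print("Scaling took \(elapsed * 1000) ms")
```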

Accelerate can be the slowest method for a variety of reasons:

  1. The code you show may spend a lot of time just extracting the data from the CGImage and making a new image. You didn't, for example, use any features that would allow the CGImage to use your vImage result directly rather than make a copy. Possibly a colorspace conversion was also required as part of some of those extract / create CGImage operations. Hard to tell from here.
  2. Some of the other methods may not have done anything at all, deferring the work until later, when absolutely forced to do it. If that happened after your end time, then the work wasn't measured.
  3. Some of the other methods have the advantage of being able to use the contents of the image directly, without having to make a copy first.
  4. Different resampling methods (e.g. bilinear vs. Lanczos) have different costs.
  5. The GPU can actually be faster at some stuff, and resampling is one of the tasks it is specially optimized to do. On the flip side, random data access (such as occurs in resampling) is not a nice thing to do to the vector unit.
  6. Timing methods can impact the result. Accelerate is multithreaded. If you use wall clock time, you will get one answer. If you use getrusage or a sampler, you'll get another.
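Point 2 applies in particular to Core Image, which is lazy: building the filter graph costs almost nothing, and the actual resampling happens only when the image is rendered. A sketch (assuming the `filter` and `context` from method 4 above) of forcing that work inside the timed region:

```swift
// Core Image defers work: producing outputImage only builds the graph.
let outputImage = filter.valueForKey("outputImage") as! CIImage

let start = CFAbsoluteTimeGetCurrent()
// createCGImage is where the Lanczos resampling actually runs,
// so it must happen before the end timestamp is taken.
let cgImage = context.createCGImage(outputImage, fromRect: outputImage.extent)
let scaledImage = UIImage(CGImage: cgImage)
let elapsed = CFAbsoluteTimeGetCurrent() - start
```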

If you really think Accelerate is way off the mark here, file a bug. Before doing so, though, I certainly would check with an Instruments Time Profile that the majority of your benchmark loop is actually spent in vImageScale.
