Scaling images: how can Accelerate be the slowest method?
I am testing several methods to rescale a UIImage. I have tested all the methods posted here and measured the time each takes to resize an image.
1) UIGraphicsBeginImageContextWithOptions & UIImage -drawInRect:
let image = UIImage(contentsOfFile: self.URL.path!)! // contentsOfFile is failable
let size = CGSizeApplyAffineTransform(image.size, CGAffineTransformMakeScale(0.5, 0.5))
let hasAlpha = false
let scale: CGFloat = 0.0 // Automatically use scale factor of main screen
UIGraphicsBeginImageContextWithOptions(size, !hasAlpha, scale)
image.drawInRect(CGRect(origin: CGPointZero, size: size))
let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
2) CGBitmapContextCreate & CGContextDrawImage
let cgImage = UIImage(contentsOfFile: self.URL.path!)!.CGImage
let width = CGImageGetWidth(cgImage) / 2
let height = CGImageGetHeight(cgImage) / 2
let bitsPerComponent = CGImageGetBitsPerComponent(cgImage)
let bytesPerRow = CGImageGetBytesPerRow(cgImage)
let colorSpace = CGImageGetColorSpace(cgImage)
let bitmapInfo = CGImageGetBitmapInfo(cgImage)
let context = CGBitmapContextCreate(nil, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo.rawValue)
CGContextSetInterpolationQuality(context, .High)
CGContextDrawImage(context, CGRect(origin: CGPointZero, size: CGSize(width: CGFloat(width), height: CGFloat(height))), cgImage)
let scaledImage = CGBitmapContextCreateImage(context).flatMap { UIImage(CGImage: $0) }
3) CGImageSourceCreateThumbnailAtIndex
import ImageIO
if let imageSource = CGImageSourceCreateWithURL(self.URL, nil) {
    // `size` is the original image's size in points
    let options: [NSString: NSObject] = [
        kCGImageSourceThumbnailMaxPixelSize: max(size.width, size.height) / 2.0,
        kCGImageSourceCreateThumbnailFromImageAlways: true
    ]
    let scaledImage = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, options).flatMap { UIImage(CGImage: $0) }
}
4) Lanczos resampling with Core Image
let image = CIImage(contentsOfURL: self.URL)
let filter = CIFilter(name: "CILanczosScaleTransform")!
filter.setValue(image, forKey: "inputImage")
filter.setValue(0.5, forKey: "inputScale")
filter.setValue(1.0, forKey: "inputAspectRatio")
let outputImage = filter.valueForKey("outputImage") as! CIImage
let context = CIContext(options: [kCIContextUseSoftwareRenderer: false])
let scaledImage = UIImage(CGImage: context.createCGImage(outputImage, fromRect: outputImage.extent))
5) vImage in Accelerate
let image = UIImage(contentsOfFile: self.URL.path!)! // contentsOfFile is failable
let cgImage = image.CGImage
// create a source buffer
var format = vImage_CGImageFormat(bitsPerComponent: 8, bitsPerPixel: 32, colorSpace: nil,
bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.First.rawValue),
version: 0, decode: nil, renderingIntent: CGColorRenderingIntent.RenderingIntentDefault)
var sourceBuffer = vImage_Buffer()
defer {
    // free the buffer that vImageBuffer_InitWithCGImage allocates below
    // (the original height * height * 4 was wrong for non-square images)
    free(sourceBuffer.data)
}
var error = vImageBuffer_InitWithCGImage(&sourceBuffer, &format, nil, cgImage, numericCast(kvImageNoFlags))
guard error == kvImageNoError else { return nil }
// create a destination buffer
let scale = UIScreen.mainScreen().scale
let destWidth = Int(image.size.width * 0.5 * scale)
let destHeight = Int(image.size.height * 0.5 * scale)
let bytesPerPixel = CGImageGetBitsPerPixel(image.CGImage) / 8
let destBytesPerRow = destWidth * bytesPerPixel
let destData = UnsafeMutablePointer<UInt8>.alloc(destHeight * destBytesPerRow)
defer {
destData.dealloc(destHeight * destBytesPerRow)
}
var destBuffer = vImage_Buffer(data: destData, height: vImagePixelCount(destHeight), width: vImagePixelCount(destWidth), rowBytes: destBytesPerRow)
// scale the image
error = vImageScale_ARGB8888(&sourceBuffer, &destBuffer, nil, numericCast(kvImageHighQualityResampling))
guard error == kvImageNoError else { return nil }
// create a CGImage from vImage_Buffer
let destCGImage = vImageCreateCGImageFromBuffer(&destBuffer, &format, nil, nil, numericCast(kvImageNoFlags), &error)?.takeRetainedValue()
guard error == kvImageNoError else { return nil }
// create a UIImage
let scaledImage = destCGImage.flatMap { UIImage(CGImage: $0, scale: 0.0, orientation: image.imageOrientation) }
After testing this for hours and measuring the time each method took to rescale an image to 100x100, my conclusions are completely different from NSHipster's. First of all, vImage in Accelerate is 200 times slower than the first method, which in my opinion is the poor cousin of the others. The Core Image method is also slow. But I am intrigued how method #1 can smash methods #3, #4 and #5, some of which in theory process the image on the GPU.

Method #3, for example, took 2 seconds to resize a 1024x1024 image to 100x100. On the other hand, #1 took 0.01 seconds!
Am I missing something? Something must be wrong, or Apple would not have taken the time to write the Accelerate and CIImage stuff.

NOTE: I am measuring from the moment the image is already loaded into a variable to the moment a scaled version is saved to another variable. I am not considering the time it takes to read from the file.
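To make the comparison concrete, here is a minimal sketch of the kind of timing harness that note describes: only the closure that performs the resize is measured, so file I/O stays outside the clock. The `measure` helper and the stand-in workload are my own illustrations, not code from the question, and it uses plain Foundation so it runs anywhere.

```swift
import Foundation

// Hypothetical helper: times a single closure invocation and returns the
// elapsed seconds together with the closure's result. Load the image into
// a variable first; only the work inside the closure is measured.
func measure<T>(_ work: () -> T) -> (seconds: Double, result: T) {
    let start = Date()
    let result = work()
    return (Date().timeIntervalSince(start), result)
}

// Usage sketch with a stand-in workload (summing an array) in place of
// an actual image-scaling call:
let pixels = Array(repeating: 1, count: 1_000_000)
let (seconds, total) = measure { pixels.reduce(0, +) }
print("took \(seconds)s, result \(total)")
```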
Accelerate can be the slowest method for a variety of reasons:
If you really think Accelerate is way off the mark here, file a bug. Before doing so, though, I certainly would check with an Instruments time profile that you are spending the majority of your time in vImageScale in your benchmark loop.
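One benchmarking detail worth ruling out before filing anything: if each method is timed on a single cold run, one-time setup cost (creating a `CIContext`, for instance, which Apple documents as expensive) can swamp the scaling work itself. A sketch of averaging over several runs with an uncounted warm-up pass; the `averageSeconds` helper is hypothetical and the closure stands in for any of the five methods above:

```swift
import Foundation

// Sketch: run the resize once as a warm-up (not counted), then average the
// wall-clock time of the remaining runs, so one-time setup and code paging
// do not dominate the comparison between methods.
func averageSeconds(runs: Int, _ resize: () -> Void) -> Double {
    resize() // warm-up run, deliberately not timed
    var total = 0.0
    for _ in 0..<runs {
        let start = Date()
        resize()
        total += Date().timeIntervalSince(start)
    }
    return total / Double(runs)
}
```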