How do I use the Accelerate Framework With Core Graphics?

I have a project. It basically takes a photo with the iPhone camera and applies some effects to it. Before applying an effect, I use Core Graphics to scale the image to an appropriate size. After scaling and rotating the image, I use the Accelerate framework (vImage) to create the effect. My problem is that after applying the effect, the image ends up with a bluish tint. However, if I don't scale the image with Core Graphics, this bluish look doesn't appear.

The scaling code I use is from this post.
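The scaled image's backing pixel format is worth checking at this point. A typical Core Graphics scaling routine looks something like the sketch below (an assumption about the linked code, not a copy of it); on iOS, the bitmap context it draws into commonly stores pixels as premultiplied BGRA, which may not match the channel order the filter code later assumes.

// Hypothetical sketch of a typical Core Graphics scaling routine; the
// actual code from the linked post may differ.
- (UIImage *)scaleImage:(UIImage *)image toSize:(CGSize)newSize
{
    UIGraphicsBeginImageContextWithOptions(newSize, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaled;
}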

And here is my code that applies the effect:

- (UIImage *)applyFiltertoImage:(UIImage *)img
{
    CGImageRef image = img.CGImage;
    vImage_Buffer inBuffer, outBuffer;
    void *pixelBuffer;

    CGDataProviderRef inProvider = CGImageGetDataProvider(image);
    CFDataRef inBitmapData = CGDataProviderCopyData(inProvider);

    inBuffer.width = CGImageGetWidth(image);
    inBuffer.height = CGImageGetHeight(image);
    inBuffer.rowBytes = CGImageGetBytesPerRow(image);

    inBuffer.data = (void *)CFDataGetBytePtr(inBitmapData);

    pixelBuffer = malloc(CGImageGetBytesPerRow(image) * CGImageGetHeight(image));

    if (pixelBuffer == NULL) {
        NSLog(@"No buffer");
    }

    outBuffer.data = pixelBuffer;
    outBuffer.width = CGImageGetWidth(image);
    outBuffer.height = CGImageGetHeight(image);
    outBuffer.rowBytes = CGImageGetBytesPerRow(image);

    vImageConvolve_ARGB8888(&inBuffer, &outBuffer, NULL, 0, 0, self.kernel, self.size, self.size, self.divisor, NULL, kvImageEdgeExtend);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    CGContextRef ctx = CGBitmapContextCreate(outBuffer.data,
                                             outBuffer.width,
                                             outBuffer.height,
                                             8,
                                             outBuffer.rowBytes,
                                             colorSpace,
                                             kCGImageAlphaNoneSkipLast);

    CGImageRef imageRef = CGBitmapContextCreateImage(ctx);

    UIImage *blurredImage = [UIImage imageWithCGImage:imageRef];

    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(pixelBuffer);
    CFRelease(inBitmapData);
    CGImageRelease(imageRef);

    return blurredImage;
}
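For reference, self.kernel, self.size, and self.divisor are the convolution kernel properties; their actual values are not shown in the post. A hypothetical setup for a simple 3×3 box blur might look like:

// Hypothetical kernel values (the post does not show the real ones):
// a 3x3 box blur with int16_t weights, normalized by the divisor.
static const int16_t kBoxBlurKernel[9] = {
    1, 1, 1,
    1, 1, 1,
    1, 1, 1
};

self.kernel = kBoxBlurKernel;  // row-major int16_t weights
self.size = 3;                 // kernel is size x size
self.divisor = 9;              // sum of the weights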

Avoiding manual re-definition of CGContext

Try letting vImage initialize the values for you. vImageBuffer_InitWithCGImage can help you avoid some pain.

The straightforward version

- (UIImage *)applyFiltertoImage:(UIImage *)image
{
    CGImageRef originalImageRef = image.CGImage;
    CGColorSpaceRef originalColorSpace = CGColorSpaceRetain(CGImageGetColorSpace(originalImageRef));

    if (_pixelBuffer == NULL) {
        _pixelBuffer = malloc(CGImageGetBytesPerRow(originalImageRef) * CGImageGetHeight(originalImageRef));
    }

    vImage_CGImageFormat inputImageFormat =
    {
        .bitsPerComponent = (uint32_t) CGImageGetBitsPerComponent(originalImageRef),
        .bitsPerPixel = (uint32_t) CGImageGetBitsPerComponent(originalImageRef) * (uint32_t)(CGColorSpaceGetNumberOfComponents(originalColorSpace) + (kCGImageAlphaNone != CGImageGetAlphaInfo(originalImageRef))),
        .colorSpace = originalColorSpace,
        .bitmapInfo = CGImageGetBitmapInfo(originalImageRef),
        .version = 0,
        .decode = NULL,
        .renderingIntent = kCGRenderingIntentDefault
    };
    vImage_Buffer inputImageBuffer;
    vImageBuffer_InitWithCGImage(&inputImageBuffer, &inputImageFormat, NULL, originalImageRef, kvImageNoFlags);

    vImage_Buffer outputImageBuffer = {
        .data = _pixelBuffer,
        .width = CGImageGetWidth(originalImageRef),
        .height = CGImageGetHeight(originalImageRef),
        .rowBytes = CGImageGetBytesPerRow(originalImageRef)
    };

    vImage_Error error;
    error = vImageConvolve_ARGB8888(&inputImageBuffer,
                                    &outputImageBuffer,
                                    NULL,
                                    0,
                                    0,
                                    self.kernel,
                                    self.size,
                                    self.size,
                                    self.divisor,
                                    NULL,
                                    kvImageEdgeExtend);
    if (error) {
        NSLog(@"vImage error %zd", error);
    }
    free(inputImageBuffer.data);

    vImage_CGImageFormat outFormat =
    {
        .bitsPerComponent = (uint32_t) CGImageGetBitsPerComponent(originalImageRef),
        .bitsPerPixel = (uint32_t) CGImageGetBitsPerComponent(originalImageRef) * (uint32_t)(CGColorSpaceGetNumberOfComponents(originalColorSpace) + (kCGImageAlphaNone != CGImageGetAlphaInfo(originalImageRef))),
        .colorSpace = originalColorSpace,
        .bitmapInfo = CGImageGetBitmapInfo(originalImageRef),
        .version = 0,
        .decode = NULL,
        .renderingIntent = kCGRenderingIntentDefault
    };
    CGImageRef modifiedImageRef = vImageCreateCGImageFromBuffer(&outputImageBuffer,
                                                                &outFormat,
                                                                NULL,
                                                                NULL,
                                                                kvImageNoFlags,
                                                                &error);
    CGColorSpaceRelease(originalColorSpace);

    UIImage *returnImage = [UIImage imageWithCGImage:modifiedImageRef];
    CGImageRelease(modifiedImageRef);

    return returnImage;
}

Higher-performance edition

Create the _inputImageBuffer, _outputImageBuffer, and _outputImageFormat once per image, and then just reapply the filter to the source image. Once vImage warms up, it starts shaving several milliseconds off each call. A sketch of the one-time setup follows the method below.

- (UIImage *)applyFilter
{
    vImage_Error error;
    error = vImageConvolve_ARGB8888(&_inputImageBuffer,
                                    &_outputImageBuffer,
                                    NULL,
                                    0,
                                    0,
                                    self.kernel,
                                    self.size,
                                    self.size,
                                    self.divisor,
                                    NULL,
                                    kvImageEdgeExtend);
    if (error) {
        NSLog(@"vImage error %zd", error);
    }

    CGImageRef modifiedImageRef = vImageCreateCGImageFromBuffer(&_outputImageBuffer,
                                                                &_outputImageFormat,
                                                                NULL,
                                                                NULL,
                                                                kvImageNoFlags,
                                                                &error);
    UIImage *returnImage = [UIImage imageWithCGImage:modifiedImageRef];
    CGImageRelease(modifiedImageRef);

    return returnImage;
}
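The ivar setup isn't shown above; here is a minimal sketch of what it might look like, mirroring the straightforward version. The method name prepareBuffersForImage: and the _pixelBuffer ivar are assumptions, not part of the original answer.

// Hypothetical one-time setup for the ivars used by applyFilter above.
// Call once per source image; applyFilter can then run repeatedly.
- (void)prepareBuffersForImage:(UIImage *)image
{
    CGImageRef imageRef = image.CGImage;

    // The format is kept in an ivar so vImageCreateCGImageFromBuffer can
    // reuse it on every call. The retained color space should be released
    // when these buffers are torn down.
    _outputImageFormat = (vImage_CGImageFormat) {
        .bitsPerComponent = (uint32_t) CGImageGetBitsPerComponent(imageRef),
        .bitsPerPixel = (uint32_t) CGImageGetBitsPerPixel(imageRef),
        .colorSpace = CGColorSpaceRetain(CGImageGetColorSpace(imageRef)),
        .bitmapInfo = CGImageGetBitmapInfo(imageRef),
        .version = 0,
        .decode = NULL,
        .renderingIntent = kCGRenderingIntentDefault
    };

    // Let vImage allocate and fill the input buffer from the CGImage;
    // vImage owns the allocation, so free _inputImageBuffer.data later.
    vImageBuffer_InitWithCGImage(&_inputImageBuffer, &_outputImageFormat,
                                 NULL, imageRef, kvImageNoFlags);

    // Allocate a matching output buffer once and reuse it for every pass.
    _pixelBuffer = malloc(CGImageGetBytesPerRow(imageRef) * CGImageGetHeight(imageRef));
    _outputImageBuffer = (vImage_Buffer) {
        .data = _pixelBuffer,
        .width = CGImageGetWidth(imageRef),
        .height = CGImageGetHeight(imageRef),
        .rowBytes = CGImageGetBytesPerRow(imageRef)
    };
}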

Usually, a strong color tint means that the color channel order was lost in translation somewhere along the way; for example, you created a CGImage from BGRA data, but the data was actually ARGB.
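If a channel-order mismatch is indeed the culprit, one quick way to test (and fix) it is to permute the channels explicitly before filtering. A minimal sketch, assuming the source data is BGRA and the convolution expects ARGB; srcBuffer and dstBuffer are placeholders for vImage_Buffers like the ones in the listings above:

// Minimal sketch: reorder BGRA pixels to ARGB. permuteMap[i] is the index
// of the source channel that lands in destination channel i.
const uint8_t permuteMap[4] = { 3, 2, 1, 0 }; // B,G,R,A -> A,R,G,B
vImage_Error err = vImagePermuteChannels_ARGB8888(&srcBuffer, &dstBuffer,
                                                  permuteMap, kvImageNoFlags);
if (err != kvImageNoError) {
    NSLog(@"vImagePermuteChannels error %zd", err);
}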

Have you looked at vImage_Utilities.h?
