
I can't get vImage (Accelerate Framework) to convert 420Yp8_Cb8_Cr8 (planar) to ARGB8888

I'm trying to convert planar YpCbCr to RGBA and it's failing with error kvImageRoiLargerThanInputBuffer. I tried two different ways. Here are some code snippets. Note that 'thumbnail_buffers + 1' and 'thumbnail_buffers + 2' have half the width and height of 'thumbnail_buffers + 0', because I'm dealing with 4:2:0, so each chroma plane has (1/2)*(1/2) = 1/4 as many samples as the luma plane. This silently fails, even though I asked for an explanation with kvImagePrintDiagnosticsToConsole:

vImage_YpCbCrToARGB convertInfo;  // filled in by the call below
vImage_Error error;

error = vImageConvert_YpCbCrToARGB_GenerateConversion(
        kvImage_YpCbCrToARGBMatrix_ITU_R_709_2,
        &fullrange_8bit_clamped_to_fullrange,  // a vImage_YpCbCrPixelRange
        &convertInfo,
        kvImage420Yp8_Cb8_Cr8, kvImageARGB8888,
        kvImagePrintDiagnosticsToConsole);



uint8_t BGRA8888_permuteMap[4] = {3, 2, 1, 0};
uint8_t alpha = 255;

vImage_Buffer dest;
error = vImageConvert_420Yp8_Cb8_Cr8ToARGB8888( 
        thumbnail_buffers + 0, thumbnail_buffers + 1, thumbnail_buffers + 2,
        &dest,
        &convertInfo, BGRA8888_permuteMap, alpha, 
        kvImagePrintDiagnosticsToConsole //I don't think this flag works here
        );

So I tried again with vImageConvert_AnyToAny:

vImage_CGImageFormat cg_BGRA8888_format = {
    .bitsPerComponent = 8,
    .bitsPerPixel = 32,
    .colorSpace = baseColorspace,
    .bitmapInfo = 
        kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
    .version = 0,
    .decode = (CGFloat*)0,
    .renderingIntent = kCGRenderingIntentDefault
};


vImageCVImageFormatRef vformat = vImageCVImageFormat_Create(
    kCVPixelFormatType_420YpCbCr8Planar,
    kvImage_ARGBToYpCbCrMatrix_ITU_R_709_2,
    kCVImageBufferChromaLocation_Center,
    baseColorspace,
    0);


vImageConverterRef icref = vImageConverter_CreateForCVToCGImageFormat( 
    vformat,
    &cg_BGRA8888_format,
    (CGFloat[]){0, 0, 0},
    kvImagePrintDiagnosticsToConsole,
    &error );


vImage_Buffer dest;
error = vImageBuffer_Init( &dest, image_height, image_width, 8, kvImagePrintDiagnosticsToConsole);
error = vImageConvert_AnyToAny( icref, thumbnail_buffers, &dest, (void*)0, kvImagePrintDiagnosticsToConsole); //kvImageGetTempBufferSize

I get the same error, but this time the following message is printed to the console.

<Error>: kvImagePrintDiagnosticsToConsole: vImageConvert_AnyToAny: srcs[1].height must be >= dests[0].height

But this doesn't make any sense to me. How can my Cb height be anything other than half my Yp height (which is the same as my dest RGB height) when I've got 4:2:0 data? (Likewise for width.) What on earth am I doing wrong? I'm going to be doing other conversions as well (4:4:4, 4:2:2, etc.), so any clarification on these APIs would be greatly appreciated. Further, what should my chroma siting be for these conversions? Above I use kCVImageBufferChromaLocation_Center. Is that correct?

Some new info: since posting this I noticed a glaring error, but fixing it didn't help. Notice that in the vImageConvert_AnyToAny case above, I initialized the destination buffer with just the image width instead of 4*width to make room for RGBA. That must be the problem, right? Nope.

Notice further that in the vImageConvert_* case, I didn't initialize the destination buffer at all. I fixed that too, and it didn't help.

So far I've tried the conversion six different ways, choosing one from (vImageConvert_* | vImageConvert_AnyToAny) and one from (kvImage420Yp8_Cb8_Cr8 | kvImage420Yp8_CbCr8 | kvImage444CrYpCb8), feeding the appropriate number of input buffers each time and carefully checking that the buffers take into account the number of samples per pixel per plane. Each time I get:

<Error>: kvImagePrintDiagnosticsToConsole: vImageConvert_AnyToAny: srcs[0].width must be >= dests[0].width

which makes no sense to me. If my luma plane is, say, 100 wide, my RGBA buffer should be 400 wide. Any guidance or working code going from YCC to RGBA would be greatly appreciated.

Okay, I figured it out: part user error, part Apple bug. I was thinking of the vImage_Buffer's width and height wrong. For example, I specified the output buffer as 4 * image_width with 8 bits per pixel, when it should have been simply image_width with 32 bits per pixel. That's the same amount of memory, but it sends the wrong message to the APIs. The literal '8' on that line kept me from remembering what that slot was, I guess. A lesson I must have learned many times: name your magic numbers.

Anyway, now the bug part. Making the input and output buffers correct with regard to width, height, and pixel depth fixed all the calls to the low-level vImageConvert_420Yp8_Cb8_Cr8ToARGB8888 and friends. For example, in the planar YCC case, your Cb and Cr buffers naturally have half the width and half the height of the Yp plane. However, in the vImageConvert_AnyToAny cases these correctly-sized buffers caused the calls to fail and bail out, saying silly things like my Cb plane needed the same dimensions as my Yp plane, even for 4:2:0. This appears to be a bug in some preflighting Apple does before calling the lower-level code that does the actual work.

I worked around the vImageConvert_AnyToAny bug by simply making input chroma buffers that were too big and filling the Cb and Cr data into the top-left quadrant only. The data were found there during the conversion just fine. I made these too-big buffers with vImageBuffer_Init(), where Apple allocates a too-big malloc that goes to waste. I didn't try making the vImage_Buffers by hand, lying about the size and allocating just the memory I need. That may work, or perhaps Apple will crawl off into the weeds trusting the width and height. If you hand-make one, you'd better tell the truth about rowBytes, however.

I'm going to leave this answer for a bit before marking it correct, hoping someone at Apple sees this, fixes the bug, and perhaps gets inspired to improve the documentation for those of us stumbling about.
