
Converting camera YUV-data to ARGB with renderscript

My problem is: I've set up a camera in Android and receive the preview data through an onPreviewFrame listener, which passes me a byte[] array containing the image data in the default Android YUV format (the device does not support the R5G6B5 format). Each pixel occupies 12 bits, which makes things a little tricky. What I want to do is convert the YUV data into ARGB data in order to do image processing with it. This has to be done with RenderScript in order to maintain high performance.

My idea was to pass two pixels in one element (which would be 24 bits = 3 bytes) and then return two ARGB pixels. The problem is that in RenderScript a u8_3 (a three-component 8-bit vector) is stored in 32 bits, which means the last 8 bits are unused. But when the image data is copied into the Allocation, all 32 bits are used, so the last 8 bits get lost. Even if I used 32-bit input data, the last 8 bits would be useless, because they would hold only 2/3 of a pixel. When I define an Element consisting of a 3-byte array, it actually does have a real size of 3 bytes, but then the Allocation.copyFrom() method doesn't fill the input Allocation with data, complaining that it doesn't have the right data type to be filled with a byte[].

The RenderScript documentation states that there is a ScriptIntrinsicYuvToRGB, available in API level 17, which should do exactly that. But in fact the class doesn't exist. I've downloaded API level 17, even though it seems not to be downloadable any more. Does anyone have any information about it? Has anyone ever tried out a ScriptIntrinsic?

So, in conclusion, my question is: how can I convert the camera data to ARGB data quickly and with hardware acceleration?

This is how to do it in the Dalvik VM (I found the code somewhere online, and it works):

@SuppressWarnings("unused")
private void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {  
    final int frameSize = width * height;  
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;  
        for (int i = 0; i < width; i++, yp++) {  
            int y = (0xff & ((int) yuv420sp[yp])) - 16;  
            if (y < 0)
                y = 0;  
            if ((i & 1) == 0) {  
                v = (0xff & yuv420sp[uvp++]) - 128;  
                u = (0xff & yuv420sp[uvp++]) - 128;  
            }  
            int y1192 = 1192 * y;  
            int r = (y1192 + 1634 * v);  
            int g = (y1192 - 833 * v - 400 * u);  
            int b = (y1192 + 2066 * u);  
            if (r < 0)
                r = 0;
            else if (r > 262143)
                r = 262143;  
            if (g < 0)
                g = 0;
            else if (g > 262143)
                g = 262143;  
            if (b < 0)
                b = 0;
            else if (b > 262143)  
                b = 262143;  
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);  
        }
    }
}
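To sanity-check the fixed-point math off-device, here is a minimal, self-contained harness (plain Java, no Android dependencies; the class name DecodeCheck and the 2x2 test frame are my own additions, not part of the original code). A uniform mid-gray NV21 frame (all bytes 128) should decode to uniform gray ARGB pixels: y1192 = 1192 * (128 - 16) = 133504, which yields 0x82 for each channel.

```java
import java.util.Arrays;

public class DecodeCheck {
    // Same fixed-point YUV420SP (NV21) -> ARGB conversion as above,
    // with the clamping written via Math.min/max for brevity.
    public static void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
        final int frameSize = width * height;
        for (int j = 0, yp = 0; j < height; j++) {
            int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
            for (int i = 0; i < width; i++, yp++) {
                int y = (0xff & yuv420sp[yp]) - 16;
                if (y < 0) y = 0;
                if ((i & 1) == 0) {           // one VU pair per two pixels
                    v = (0xff & yuv420sp[uvp++]) - 128;
                    u = (0xff & yuv420sp[uvp++]) - 128;
                }
                int y1192 = 1192 * y;
                int r = Math.max(0, Math.min(262143, y1192 + 1634 * v));
                int g = Math.max(0, Math.min(262143, y1192 - 833 * v - 400 * u));
                int b = Math.max(0, Math.min(262143, y1192 + 2066 * u));
                rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
            }
        }
    }

    public static void main(String[] args) {
        int w = 2, h = 2;
        byte[] nv21 = new byte[w * h * 3 / 2];   // 4 Y bytes + 2 interleaved VU bytes
        Arrays.fill(nv21, (byte) 128);           // mid-gray: Y=128, U=V=128 (neutral chroma)
        int[] rgb = new int[w * h];
        decodeYUV420SP(rgb, nv21, w, h);
        for (int p : rgb) {
            if (p != 0xFF828282) throw new AssertionError(Integer.toHexString(p));
        }
        System.out.println("ok: " + Integer.toHexString(rgb[0]));  // ok: ff828282
    }
}
```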

I'm sure you will find the LivePreview test application interesting... it's part of the Android source code in the latest Jelly Bean (MR1). It implements a camera preview and uses ScriptIntrinsicYuvToRgb to convert the preview data with RenderScript. You can browse the source online here:

LivePreview

I was not able to get ScriptIntrinsicYuvToRgb running, so I decided to write my own RenderScript solution.

Here's the finished script (named yuv.rs):

#pragma version(1) 
#pragma rs java_package_name(com.package.name)

rs_allocation gIn;

int width;
int height;
int frameSize;

void yuvToRgb(const uchar *v_in, uchar4 *v_out, const void *usrData, uint32_t x, uint32_t y) {

    uchar yp = rsGetElementAtYuv_uchar_Y(gIn, x, y) & 0xFF;

    int index = frameSize + (x & (~1)) + (( y>>1) * width );
    int v = (int)( rsGetElementAt_uchar(gIn, index) & 0xFF ) -128;
    int u = (int)( rsGetElementAt_uchar(gIn, index+1) & 0xFF ) -128;

    int r = (int) (1.164f * yp  + 1.596f * v );
    int g = (int) (1.164f * yp  - 0.813f * v  - 0.391f * u);
    int b = (int) (1.164f * yp  + 2.018f * u );

    r = r>255? 255 : r<0 ? 0 : r;
    g = g>255? 255 : g<0 ? 0 : g;
    b = b>255? 255 : b<0 ? 0 : b;

    uchar4 res4;
    res4.r = (uchar)r;
    res4.g = (uchar)g;
    res4.b = (uchar)b;
    res4.a = 0xFF;

    *v_out = res4;
}
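As a cross-check, the floating-point coefficients in this kernel appear to be the same BT.601 constants as the fixed-point Dalvik version above: 1192, 1634, 833, 400, and 2066 are those values scaled by 1024. A quick standalone verification (plain Java; the class name is mine):

```java
public class CoeffCheck {
    public static void main(String[] args) {
        // {fixed-point constant / 1024, float constant used in the RS kernel}
        double[][] pairs = {
            {1192 / 1024.0, 1.164},  // Y scale
            {1634 / 1024.0, 1.596},  // V -> R
            { 833 / 1024.0, 0.813},  // V -> G
            { 400 / 1024.0, 0.391},  // U -> G
            {2066 / 1024.0, 2.018},  // U -> B
        };
        for (double[] p : pairs) {
            if (Math.abs(p[0] - p[1]) > 0.002) {
                throw new AssertionError(p[0] + " vs " + p[1]);
            }
        }
        System.out.println("coefficients match");
    }
}
```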

Don't forget to set the camera preview format to NV21:

Parameters cameraParameters = camera.getParameters();
cameraParameters.setPreviewFormat(ImageFormat.NV21);
// Other camera init stuff: preview size, framerate, etc.
camera.setParameters(cameraParameters);

Allocation initialization and script usage:

// Somewhere in initialization section 
// w and h are variables for selected camera preview size
rs = RenderScript.create(this); 

Type.Builder tbIn = new Type.Builder(rs, Element.U8(rs));
tbIn.setX(w);
tbIn.setY(h);
tbIn.setYuvFormat(ImageFormat.NV21);

Type.Builder tbOut = new Type.Builder(rs, Element.RGBA_8888(rs));
tbOut.setX(w); 
tbOut.setY(h);

inData = Allocation.createTyped(rs, tbIn.create(), Allocation.MipmapControl.MIPMAP_NONE, Allocation.USAGE_SCRIPT | Allocation.USAGE_SHARED);
outData = Allocation.createTyped(rs, tbOut.create(), Allocation.MipmapControl.MIPMAP_NONE, Allocation.USAGE_SCRIPT | Allocation.USAGE_SHARED);

outputBitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);


yuvScript = new ScriptC_yuv(rs); 
yuvScript.set_gIn(inData);
yuvScript.set_width(w);
yuvScript.set_height(h);
yuvScript.set_frameSize(previewSize);
//.....
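One point worth pinning down: the script uses frameSize as the byte offset where the interleaved VU data begins, so the previewSize passed to set_frameSize should presumably be w * h (the luma plane size), not the full NV21 buffer length, which is w * h * 3 / 2 bytes. A tiny sketch of the arithmetic (plain Java; the class and helper names are mine):

```java
public class Nv21Sizes {
    // Luma plane size = the offset where the interleaved VU plane starts;
    // this is the value the script expects in frameSize.
    static int lumaSize(int w, int h) {
        return w * h;
    }

    // Total NV21 buffer delivered by onPreviewFrame:
    // full-resolution Y plane plus a half-resolution interleaved VU plane.
    static int bufferSize(int w, int h) {
        return w * h * 3 / 2;
    }

    public static void main(String[] args) {
        // e.g. a 640x480 preview
        System.out.println(lumaSize(640, 480));    // 307200
        System.out.println(bufferSize(640, 480));  // 460800
    }
}
```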

Camera callback method:

public void onPreviewFrame(byte[] data, Camera camera) {
    // data contains the NV21 preview bytes from the camera
    inData.copyFrom(data);
    yuvScript.forEach_yuvToRgb(inData, outData);
    outData.copyTo(outputBitmap);
    // draw your bitmap where you want to 
    // .....
}

We now have the new renderscript-intrinsics-replacement-toolkit to do this. First, build the renderscript module, import it into your project, and add it as a dependency of your app module. Then go to Toolkit.kt and add the following:

fun toNv21(image: Image): ByteArray? {
    val nv21 = ByteArray((image.width * image.height * 1.5f).toInt())
    return if (!nativeYuv420toNv21(
            nativeHandle,
            image.width,
            image.height,
            image.planes[0].buffer,       // Y buffer
            image.planes[1].buffer,       // U buffer
            image.planes[2].buffer,       // V buffer
            image.planes[0].pixelStride,  // Y pixel stride
            image.planes[1].pixelStride,  // U/V pixel stride
            image.planes[0].rowStride,    // Y row stride
            image.planes[1].rowStride,    // U/V row stride
            nv21
        )
    ) {
        null
    } else nv21
}

private external fun nativeYuv420toNv21(
    nativeHandle: Long,
    imageWidth: Int,
    imageHeight: Int,
    yByteBuffer: ByteBuffer,
    uByteBuffer: ByteBuffer,
    vByteBuffer: ByteBuffer,
    yPixelStride: Int,
    uvPixelStride: Int,
    yRowStride: Int,
    uvRowStride: Int,
    nv21Output: ByteArray
): Boolean

Now go to JniEntryPoints.cpp and add the following:

extern "C" JNIEXPORT jboolean JNICALL Java_com_google_android_renderscript_Toolkit_nativeYuv420toNv21(
        JNIEnv *env, jobject/*thiz*/, jlong native_handle,
        jint image_width, jint image_height, jobject y_byte_buffer,
        jobject u_byte_buffer, jobject v_byte_buffer, jint y_pixel_stride,
        jint uv_pixel_stride, jint y_row_stride, jint uv_row_stride,
        jbyteArray nv21_array) {

    auto y_buffer = static_cast<jbyte*>(env->GetDirectBufferAddress(y_byte_buffer));
    auto u_buffer = static_cast<jbyte*>(env->GetDirectBufferAddress(u_byte_buffer));
    auto v_buffer = static_cast<jbyte*>(env->GetDirectBufferAddress(v_byte_buffer));

    jbyte* nv21 = env->GetByteArrayElements(nv21_array, nullptr);
    if (nv21 == nullptr || y_buffer == nullptr || u_buffer == nullptr
        || v_buffer == nullptr) {
        // Log this.
        return false;
    }

    RenderScriptToolkit* toolkit = reinterpret_cast<RenderScriptToolkit*>(native_handle);
    toolkit->yuv420toNv21(image_width, image_height, y_buffer, u_buffer, v_buffer,
                 y_pixel_stride, uv_pixel_stride, y_row_stride, uv_row_stride,
                 nv21);

    env->ReleaseByteArrayElements(nv21_array, nv21, 0);
    return true;
}

Go to YuvToRgb.cpp and add the following:

void RenderScriptToolkit::yuv420toNv21(int image_width, int image_height, const int8_t* y_buffer,
                  const int8_t* u_buffer, const int8_t* v_buffer, int y_pixel_stride,
                  int uv_pixel_stride, int y_row_stride, int uv_row_stride,
                  int8_t *nv21) {
    // Copy Y channel.
    for(int y = 0; y < image_height; ++y) {
        int destOffset = image_width * y;
        int yOffset = y * y_row_stride;
        memcpy(nv21 + destOffset, y_buffer + yOffset, image_width);
    }

    if (v_buffer - u_buffer == sizeof(int8_t)) {
        // format = nv21
        // TODO: If the format is VUVUVU and pixel stride == 1, we could simplify
        // the copy with a single memcpy. In Android Camera2, though, I have mostly
        // come across UVUVUV packing.
    }

    // Copy UV Channel.
    int idUV = image_width * image_height;
    int uv_width = image_width / 2;
    int uv_height = image_height / 2;
    for(int y = 0; y < uv_height; ++y) {
        int uvOffset = y * uv_row_stride;
        for (int x = 0; x < uv_width; ++x) {
            int bufferIndex = uvOffset + (x * uv_pixel_stride);
            // V channel.
            nv21[idUV++] = v_buffer[bufferIndex];
            // U channel.
            nv21[idUV++] = u_buffer[bufferIndex];
        }
    }
}
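To check the V-before-U interleave order that makes the output NV21 rather than NV12, here is a plain-Java mirror of the same repacking loop, simplified to pixelStride = 1 and rowStride = width (the class and method names are my own, not part of the toolkit):

```java
import java.util.Arrays;

public class Yuv420ToNv21 {
    // Mirrors the C++ yuv420toNv21 loop above: copy the Y plane, then
    // interleave V and U (V first, NV21 order) from separate planar buffers.
    static byte[] toNv21(int w, int h, byte[] y, byte[] u, byte[] v) {
        byte[] nv21 = new byte[w * h * 3 / 2];
        System.arraycopy(y, 0, nv21, 0, w * h);   // Y plane, row by row
        int id = w * h;                            // start of the VU plane
        for (int i = 0; i < (w / 2) * (h / 2); i++) {
            nv21[id++] = v[i];                     // V channel first (NV21)
            nv21[id++] = u[i];                     // then U channel
        }
        return nv21;
    }

    public static void main(String[] args) {
        int w = 4, h = 2;
        byte[] y = new byte[w * h];
        Arrays.fill(y, (byte) 10);
        byte[] u = {20, 21};
        byte[] v = {30, 31};
        byte[] nv21 = toNv21(w, h, y, u, v);
        // After the 8 Y bytes: V0, U0, V1, U1
        System.out.println(Arrays.toString(Arrays.copyOfRange(nv21, 8, 12)));
        // -> [30, 20, 31, 21]
    }
}
```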

Finally, go to RenderScriptToolkit.h and add the following:

/**
 * https://blog.minhazav.dev/how-to-use-renderscript-to-convert-YUV_420_888-yuv-image-to-bitmap/#tobitmapimage-image-method
 * @param image_width width of the image you want to convert to a byte array
 * @param image_height height of the image you want to convert to a byte array
 * @param y_buffer Y buffer
 * @param u_buffer U buffer
 * @param v_buffer V buffer
 * @param y_pixel_stride Y pixel stride
 * @param uv_pixel_stride UV pixel stride
 * @param y_row_stride Y row stride
 * @param uv_row_stride UV row stride
 * @param nv21 the output byte array
 */
void yuv420toNv21(int image_width, int image_height, const int8_t* y_buffer,
                  const int8_t* u_buffer, const int8_t* v_buffer, int y_pixel_stride,
                  int uv_pixel_stride, int y_row_stride, int uv_row_stride,
                  int8_t *nv21);

You are now ready to harness the full power of RenderScript. Below is an example with the ARCore camera Image object (replace the first line with whatever code gives you your camera image):

val cameraImage = arFrame.frame.acquireCameraImage()
val width = cameraImage.width
val height = cameraImage.height
val byteArray = Toolkit.toNv21(cameraImage)
byteArray?.let {
    Toolkit.yuvToRgbBitmap(
        byteArray,
        width,
        height,
        YuvFormat.NV21
    ).let { bitmap ->
        saveBitmapToDevice(
            name,
            session,
            bitmap,
            context
        )
    }
}
