Converting camera YUV data to ARGB with RenderScript

My problem is: I've set up a camera in Android and receive the preview data through an onPreviewFrame listener, which passes me a byte[] array containing the image data in Android's default YUV format (the device does not support the R5G6B5 format). Each pixel takes 12 bits on average, which makes things a little tricky. Now what I want to do is convert the YUV data into ARGB data in order to do image processing on it. This has to be done with RenderScript in order to maintain high performance.
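For context, that default preview format is NV21: a full-resolution Y (luma) plane followed by V/U (chroma) samples interleaved at quarter resolution, which averages out to 12 bits per pixel:

// NV21 layout for a w×h frame (w*h*3/2 bytes total):
// bytes [0, w*h)        -> Y plane, one byte per pixel
// bytes [w*h, w*h*3/2)  -> interleaved chroma: V0 U0 V1 U1 ...,
//                          one V/U pair shared by each 2x2 pixel block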

My idea was to pass two pixels in one element (which would be 24 bits = 3 bytes) and then return two ARGB pixels. The problem is that in RenderScript a u8_3 (a 3-component 8-bit vector) is stored in 32 bits, which means the last 8 bits are unused. But when copying the image data into the allocation, all 32 bits are used, so the last 8 bits get lost. Even if I used 32-bit input data, the last 8 bits would be useless, because they're only 2/3 of a pixel. When defining an element consisting of a 3-byte array, it actually has a real size of 3 bytes. But then the Allocation.copyFrom() method doesn't fill the in-Allocation with data, complaining that it doesn't have the right data type to be filled with a byte[].
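A minimal reconstruction of that failing approach (illustrative only; the variables rs, width, height and yuvData are assumed):

// Two packed YUV pixels (3 bytes of payload) per u8_3 element:
Type.Builder tb = new Type.Builder(rs, Element.U8_3(rs));
tb.setX(width * height / 2);
Allocation in = Allocation.createTyped(rs, tb.create());
// Fails as described above: u8_3 elements are padded to 4 bytes, so the
// allocation's element type does not match a plain byte[] and copyFrom()
// rejects it.
in.copyFrom(yuvData);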

The RenderScript documentation states that there is a ScriptIntrinsicYuvToRGB which should do exactly that in API level 17. But in fact the class doesn't exist. I've downloaded API level 17, even though it doesn't seem to be downloadable any more. Does anyone have any information about it? Has anyone ever tried out a ScriptIntrinsic?

So in conclusion, my question is: how do I convert the camera data into ARGB data quickly and hardware-accelerated?

This is how to do it in the Dalvik VM (I found the code somewhere online, and it works):

@SuppressWarnings("unused")
private void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        // The interleaved V/U plane starts at frameSize; one chroma row
        // serves two luma rows.
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0)
                y = 0;
            // Read a new V/U pair for every second pixel (2x2 subsampling).
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            // Fixed-point BT.601 conversion, coefficients scaled by 1024
            // (1192/1024 ~ 1.164, 1634/1024 ~ 1.596, etc.).
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);
            // Clamp to the 18-bit fixed-point range [0, 262143].
            if (r < 0)
                r = 0;
            else if (r > 262143)
                r = 262143;
            if (g < 0)
                g = 0;
            else if (g > 262143)
                g = 262143;
            if (b < 0)
                b = 0;
            else if (b > 262143)
                b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
}
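A quick usage sketch (assuming data comes from onPreviewFrame and the preview size is w×h):

int[] rgb = new int[w * h];
decodeYUV420SP(rgb, data, w, h);
Bitmap bmp = Bitmap.createBitmap(rgb, w, h, Bitmap.Config.ARGB_8888);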

I'm sure you will find the LivePreview test application interesting... it's part of the Android source code in the latest Jelly Bean (MR1). It implements a camera preview and uses ScriptIntrinsicYuvToRGB to convert the preview data with RenderScript. You can browse the source online here:

LivePreview
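For reference, a minimal sketch of the intrinsic-based conversion along the lines of what LivePreview does (the variables context, data, w, h and bitmap are assumed):

RenderScript rs = RenderScript.create(context);
ScriptIntrinsicYuvToRGB intrinsic = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

// NV21 input is a flat byte buffer of w*h*3/2 bytes.
Type.Builder yuvType = new Type.Builder(rs, Element.U8(rs)).setX(data.length);
Allocation in = Allocation.createTyped(rs, yuvType.create(), Allocation.USAGE_SCRIPT);

Type.Builder rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs)).setX(w).setY(h);
Allocation out = Allocation.createTyped(rs, rgbaType.create(), Allocation.USAGE_SCRIPT);

in.copyFrom(data);
intrinsic.setInput(in);
intrinsic.forEach(out);
out.copyTo(bitmap); // bitmap must be a w×h ARGB_8888 Bitmap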

I was not able to get ScriptIntrinsicYuvToRGB running, so I decided to write my own RenderScript solution.

Here's the finished script (named yuv.rs):

#pragma version(1)
#pragma rs java_package_name(com.package.name)

rs_allocation gIn;

int width;
int height;
int frameSize;

void yuvToRgb(const uchar *v_in, uchar4 *v_out, const void *usrData, uint32_t x, uint32_t y) {

    // Luma sample for this pixel.
    uchar yp = rsGetElementAtYuv_uchar_Y(gIn, x, y) & 0xFF;

    // Index of the interleaved V/U pair in the NV21 chroma plane:
    // one pair per 2x2 block, so drop the lowest bit of x and halve y.
    int index = frameSize + (x & ~1) + ((y >> 1) * width);
    int v = (int)(rsGetElementAt_uchar(gIn, index) & 0xFF) - 128;
    int u = (int)(rsGetElementAt_uchar(gIn, index + 1) & 0xFF) - 128;

    // BT.601 YUV -> RGB conversion.
    int r = (int)(1.164f * yp + 1.596f * v);
    int g = (int)(1.164f * yp - 0.813f * v - 0.391f * u);
    int b = (int)(1.164f * yp + 2.018f * u);

    r = r > 255 ? 255 : r < 0 ? 0 : r;
    g = g > 255 ? 255 : g < 0 ? 0 : g;
    b = b > 255 ? 255 : b < 0 ? 0 : b;

    uchar4 res4;
    res4.r = (uchar)r;
    res4.g = (uchar)g;
    res4.b = (uchar)b;
    res4.a = 0xFF;

    *v_out = res4;
}

Don't forget to set the camera preview format to NV21:

Parameters cameraParameters = camera.getParameters();
cameraParameters.setPreviewFormat(ImageFormat.NV21);
// Other camera init stuff: preview size, framerate, etc.
camera.setParameters(cameraParameters);

Allocation initialization and script usage:

// Somewhere in initialization section 
// w and h are variables for selected camera preview size
rs = RenderScript.create(this); 

Type.Builder tbIn = new Type.Builder(rs, Element.U8(rs));
tbIn.setX(w);
tbIn.setY(h);
tbIn.setYuvFormat(ImageFormat.NV21);

Type.Builder tbOut = new Type.Builder(rs, Element.RGBA_8888(rs));
tbOut.setX(w); 
tbOut.setY(h);

// Note: usage flags must be combined with |, not & (ANDing them yields 0).
inData = Allocation.createTyped(rs, tbIn.create(), Allocation.MipmapControl.MIPMAP_NONE,
        Allocation.USAGE_SCRIPT | Allocation.USAGE_SHARED);
outData = Allocation.createTyped(rs, tbOut.create(), Allocation.MipmapControl.MIPMAP_NONE,
        Allocation.USAGE_SCRIPT | Allocation.USAGE_SHARED);

outputBitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);


yuvScript = new ScriptC_yuv(rs); 
yuvScript.set_gIn(inData);
yuvScript.set_width(w);
yuvScript.set_height(h);
yuvScript.set_frameSize(w * h); // size of the Y plane; chroma starts here
//.....

Camera callback method:

public void onPreviewFrame(byte[] data, Camera camera) {
    // data contains the NV21 frame delivered by the camera
    inData.copyFrom(data);
    yuvScript.forEach_yuvToRgb(inData, outData);
    outData.copyTo(outputBitmap);
    // draw the bitmap wherever you want to
    // .....
}
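For example, one way to draw it, assuming an ImageView field named preview (name assumed):

runOnUiThread(() -> preview.setImageBitmap(outputBitmap));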

We now have the new renderscript-intrinsics-replacement-toolkit to do this. First, build and import the renderscript module into your project and add it as a dependency to your app module. Then, go to Toolkit.kt and add the following:

fun toNv21(image: Image): ByteArray? {
    val nv21 = ByteArray((image.width * image.height * 1.5f).toInt())
    return if (!nativeYuv420toNv21(
            nativeHandle,
            image.width,
            image.height,
            image.planes[0].buffer,       // Y buffer
            image.planes[1].buffer,       // U buffer
            image.planes[2].buffer,       // V buffer
            image.planes[0].pixelStride,  // Y pixel stride
            image.planes[1].pixelStride,  // U/V pixel stride
            image.planes[0].rowStride,    // Y row stride
            image.planes[1].rowStride,    // U/V row stride
            nv21
        )
    ) {
        null
    } else nv21
}

private external fun nativeYuv420toNv21(
    nativeHandle: Long,
    imageWidth: Int,
    imageHeight: Int,
    yByteBuffer: ByteBuffer,
    uByteBuffer: ByteBuffer,
    vByteBuffer: ByteBuffer,
    yPixelStride: Int,
    uvPixelStride: Int,
    yRowStride: Int,
    uvRowStride: Int,
    nv21Output: ByteArray
): Boolean

Now, go to JniEntryPoints.cpp and add the following:

extern "C" JNIEXPORT jboolean JNICALL Java_com_google_android_renderscript_Toolkit_nativeYuv420toNv21(
        JNIEnv *env, jobject/*thiz*/, jlong native_handle,
        jint image_width, jint image_height, jobject y_byte_buffer,
        jobject u_byte_buffer, jobject v_byte_buffer, jint y_pixel_stride,
        jint uv_pixel_stride, jint y_row_stride, jint uv_row_stride,
        jbyteArray nv21_array) {

    auto y_buffer = static_cast<jbyte*>(env->GetDirectBufferAddress(y_byte_buffer));
    auto u_buffer = static_cast<jbyte*>(env->GetDirectBufferAddress(u_byte_buffer));
    auto v_buffer = static_cast<jbyte*>(env->GetDirectBufferAddress(v_byte_buffer));

    jbyte* nv21 = env->GetByteArrayElements(nv21_array, nullptr);
    if (nv21 == nullptr || y_buffer == nullptr || u_buffer == nullptr
        || v_buffer == nullptr) {
        // Log this.
        return false;
    }

    RenderScriptToolkit* toolkit = reinterpret_cast<RenderScriptToolkit*>(native_handle);
    toolkit->yuv420toNv21(image_width, image_height, y_buffer, u_buffer, v_buffer,
                 y_pixel_stride, uv_pixel_stride, y_row_stride, uv_row_stride,
                 nv21);

    env->ReleaseByteArrayElements(nv21_array, nv21, 0);
    return true;
}

Go to YuvToRgb.cpp and add the following:

void RenderScriptToolkit::yuv420toNv21(int image_width, int image_height, const int8_t* y_buffer,
                  const int8_t* u_buffer, const int8_t* v_buffer, int y_pixel_stride,
                  int uv_pixel_stride, int y_row_stride, int uv_row_stride,
                  int8_t *nv21) {
    // Copy Y channel.
    for(int y = 0; y < image_height; ++y) {
        int destOffset = image_width * y;
        int yOffset = y * y_row_stride;
        memcpy(nv21 + destOffset, y_buffer + yOffset, image_width);
    }

    if (v_buffer - u_buffer == sizeof(int8_t)) {
        // format = nv21
        // TODO: If the format is VUVUVU & pixel stride == 1, we could simplify
        // the copy with memcpy. In Android Camera2 I have mostly come across
        // UVUVUV packaging though.
    }

    // Copy UV Channel.
    int idUV = image_width * image_height;
    int uv_width = image_width / 2;
    int uv_height = image_height / 2;
    for(int y = 0; y < uv_height; ++y) {
        int uvOffset = y * uv_row_stride;
        for (int x = 0; x < uv_width; ++x) {
            int bufferIndex = uvOffset + (x * uv_pixel_stride);
            // V channel.
            nv21[idUV++] = v_buffer[bufferIndex];
            // U channel.
            nv21[idUV++] = u_buffer[bufferIndex];
        }
    }
}

Finally, go to RenderscriptToolkit.h and add the following:

/**
 * https://blog.minhazav.dev/how-to-use-renderscript-to-convert-YUV_420_888-yuv-image-to-bitmap/#tobitmapimage-image-method
 * @param image_width width of the image you want to convert to a byte array
 * @param image_height height of the image you want to convert to a byte array
 * @param y_buffer Y buffer
 * @param u_buffer U buffer
 * @param v_buffer V buffer
 * @param y_pixel_stride Y pixel stride
 * @param uv_pixel_stride UV pixel stride
 * @param y_row_stride Y row stride
 * @param uv_row_stride UV row stride
 * @param nv21 the output byte array
 */
void yuv420toNv21(int image_width, int image_height, const int8_t* y_buffer,
                  const int8_t* u_buffer, const int8_t* v_buffer, int y_pixel_stride,
                  int uv_pixel_stride, int y_row_stride, int uv_row_stride,
                  int8_t *nv21);

You are now ready to harness the full power of RenderScript. Below is an example with the ARCore camera Image object (replace the first line with whatever code gives you your camera image):

val cameraImage = arFrame.frame.acquireCameraImage()
val width = cameraImage.width
val height = cameraImage.height
val byteArray = Toolkit.toNv21(cameraImage)
cameraImage.close() // acquired images must be closed, or later acquires will fail
byteArray?.let {
    Toolkit.yuvToRgbBitmap(
        byteArray,
        width,
        height,
        YuvFormat.NV21
    ).let { bitmap ->
        saveBitmapToDevice(
            name,
            session,
            bitmap,
            context
        )
    }
}
