
Android Fast YUV420P to ARGB8888 Conversion

I am using Android's MediaCodec API to decode H.264 encoded video data coming in from the network to do live streaming.

I am aware that I can render the video to a view directly by configuring the decoder to operate in the Surface mode.

However, with this approach the output video exhibits color banding on certain platforms. The affected target platform is Android-x86 6.0 running on an Intel Celeron N2930 processor.

After endless attempts to fix the problem, I decided to use the decoder in what is known as ByteBuffer mode instead. In this mode, I receive decoded video frames in ByteBuffers that hold the color values of the frames' pixels.

The decoded frames are in YUV420p on the target platform mentioned above. To display a frame in a view, I must first convert it to a Bitmap, and before creating the Bitmap I have to convert the frame's color format to ARGB8888.

Initially I did this conversion in Java, but the process is far too slow to be of any use for live streaming: 300+ ms per frame. I then rewrote the conversion in native C, which cut the time roughly in half, but it is still too slow to reach even 20 fps on 1920x1080 video.
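Before reaching for a library, one common speedup is replacing the per-pixel float math with fixed-point integer arithmetic. Below is a minimal, self-contained sketch of that idea using BT.601 coefficients scaled by 2^16; the function name `yuv_to_argb_fixed` is illustrative, not from the original code:

```c
#include <stdint.h>

static inline int clamp255(int x) { return x < 0 ? 0 : (x > 255 ? 255 : x); }

/* Fixed-point BT.601 YUV -> ARGB8888, coefficients scaled by 65536:
   1.402 -> 91881, 0.344 -> 22553, 0.714 -> 46802, 1.772 -> 116130.
   u and v are raw 0..255 plane values (128 = neutral chroma). */
uint32_t yuv_to_argb_fixed(int y, int u, int v) {
    u -= 128;
    v -= 128;
    /* >> 16 on a negative value is an arithmetic shift on common platforms */
    int r = y + ((91881 * v) >> 16);
    int g = y - ((22553 * u + 46802 * v) >> 16);
    int b = y + ((116130 * u) >> 16);
    return 0xFF000000u | (uint32_t)(clamp255(r) << 16)
                       | (uint32_t)(clamp255(g) << 8)
                       | (uint32_t)clamp255(b);
}
```

This avoids the float-to-int conversions in the inner loop, but on its own it usually buys only a modest constant factor; the larger wins come from SIMD or a tuned library.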

static jint convertYUVtoRGB(jint y, jint u, jint v);

JNIEXPORT jintArray JNICALL
Java_my_package_name_class_convertYUV420PToARGB8888_1Native2(
        JNIEnv *env, jclass type, jbyteArray data_, jint width, jint height) {
    jbyte *data = (*env)->GetByteArrayElements(env, data_, NULL);
    if (!data) {
        __android_log_print(ANDROID_LOG_ERROR, TAG,
                            "GetByteArrayElements failed");
        return NULL;
    }

    const jint frameSize = width * height;
    const jint offset_u = frameSize;
    const jint offset_v = frameSize + frameSize / 4;

    jint *pixels = malloc(sizeof(jint) * frameSize);
    if (!pixels) {
        (*env)->ReleaseByteArrayElements(env, data_, data, JNI_ABORT);
        return NULL;
    }
    jint u, v, y1, y2, y3, y4;

    // i walks the Y samples and the output pixels
    // k walks the U and V samples
    jint i;
    jint k;
    for (i = 0, k = 0; i < frameSize; i += 2, k += 1) {
        // process 2*2 pixels in one iteration
        y1 = data[i] & 0xff;
        y2 = data[i + 1] & 0xff;
        y3 = data[width + i] & 0xff;
        y4 = data[width + i + 1] & 0xff;

        u = data[offset_u + k] & 0xff;
        v = data[offset_v + k] & 0xff;
        u = u - 128;
        v = v - 128;

        pixels[i] = convertYUVtoRGB(y1, u, v);
        pixels[i + 1] = convertYUVtoRGB(y2, u, v);
        pixels[width + i] = convertYUVtoRGB(y3, u, v);
        pixels[width + i + 1] = convertYUVtoRGB(y4, u, v);

        if (i != 0 && (i + 2) % width == 0) {
            i += width;
        }
    }

    jintArray result = (*env)->NewIntArray(env, frameSize);
    if (!result) {
        free(pixels);
        (*env)->ReleaseByteArrayElements(env, data_, data, JNI_ABORT);
        return NULL;
    }

    (*env)->SetIntArrayRegion(env, /* env */
                              result, /* array */
                              0, /* start */
                              frameSize, /* len */
                              pixels /* buf */
    );

    // free resources
    free(pixels);
    (*env)->ReleaseByteArrayElements(env,
                                     data_, /* array */
                                     data, /* elems */
                                     JNI_ABORT /* mode */
    );

    return result;
}

static jint convertYUVtoRGB(jint y, jint u, jint v) {
    jint r, g, b;

    r = y + (jint) (1.402f * v);
    g = y - (jint) (0.344f * u + 0.714f * v);
    b = y + (jint) (1.772f * u);

    r = r > 255 ? 255 : r < 0 ? 0 : r;
    g = g > 255 ? 255 : g < 0 ? 0 : g;
    b = b > 255 ? 255 : b < 0 ? 0 : b;

    return 0xff000000 | (r << 16) | (g << 8) | b;
}

This function, which I obtained somewhere here on SO, processes the frame in 2x2 pixel blocks, one block per iteration.

How can I speed up this process? Are there any libraries or other methods?

Thank you in advance.

Managed to solve this by using ffmpeg's libswscale library.
