
How to scale, crop, and rotate all at once in Android RenderScript

Is it possible to take a camera image in Y'UV format and, using RenderScript:

  1. Convert it to RGBA
  2. Crop it to a certain region
  3. Rotate it if necessary

Yes! I figured out how and thought I would share it with others. RenderScript has a bit of a learning curve, and simpler examples seem to help.

When cropping, you still need to set up an input and output allocation, as well as one for the script itself. It might seem strange at first, but the input and output allocations have to be the same size, so if you are cropping you need to set up yet another Allocation to receive the cropped output. More on that in a second.

#pragma version(1)
#pragma rs java_package_name(com.autofrog.chrispvision)
#pragma rs_fp_relaxed

/*
 * This is mInputAllocation
 */
rs_allocation gInputFrame;

/*
 * This is where we write our cropped image
 */
rs_allocation gOutputFrame;

/*
 * These dimensions define the crop region that we want
 */
uint32_t xStart, yStart;
uint32_t outputWidth, outputHeight;

uchar4 __attribute__((kernel)) yuv2rgbFrames(uchar4 in, uint32_t x, uint32_t y)
{
    uchar Y = rsGetElementAtYuv_uchar_Y(gInputFrame, x, y);
    uchar U = rsGetElementAtYuv_uchar_U(gInputFrame, x, y);
    uchar V = rsGetElementAtYuv_uchar_V(gInputFrame, x, y);

    uchar4 rgba = rsYuvToRGBA_uchar4(Y, U, V);

    /* force the alpha channel to opaque - the conversion doesn't seem to do this */
    rgba.a = 0xFF;

    uint32_t translated_x = x - xStart;
    uint32_t translated_y = y - yStart;

    /* rotate 90 degrees; the -1 keeps x_rotated within [0, outputWidth) */
    uint32_t x_rotated = outputWidth - 1 - translated_y;
    uint32_t y_rotated = translated_x;

    rsSetElementAt_uchar4(gOutputFrame, rgba, x_rotated, y_rotated);
    return rgba;
}
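The last three lines of the kernel implement the rotation as a pure index remapping: the cropped pixel (translated_x, translated_y) lands at (outputWidth - 1 - translated_y, translated_x) in the output. A minimal standalone sketch of that mapping in plain Java (no RenderScript, tiny 2x3 grid), illustrating why the `- 1` is needed to stay in bounds:

```java
public class RotateDemo {
    /** Rotate an h x w pixel grid 90 degrees, mirroring the kernel's index math. */
    static int[][] rotate90(int[][] src) {
        int h = src.length, w = src[0].length;
        int outW = h, outH = w;           // dimensions swap under a 90-degree turn
        int[][] dst = new int[outH][outW];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int xRot = outW - 1 - y;  // without the -1, row y == 0 would write out of bounds
                int yRot = x;
                dst[yRot][xRot] = src[y][x];
            }
        }
        return dst;
    }

    public static void main(String[] args) {
        int[][] src = { {1, 2, 3},
                        {4, 5, 6} };      // 2 rows x 3 cols
        int[][] dst = rotate90(src);
        // dst is 3 rows x 2 cols:
        // 4 1
        // 5 2
        // 6 3
        System.out.println(dst[0][0] + " " + dst[0][1]);  // prints "4 1"
        System.out.println(dst[2][0] + " " + dst[2][1]);  // prints "6 3"
    }
}
```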

To set up the allocations:

private fun createAllocations(rs: RenderScript) {

    /*
     * The yuvTypeBuilder is for the input from the camera.  It has to be the
     * same size as the camera (preview) image
     */
    val yuvTypeBuilder = Type.Builder(rs, Element.YUV(rs))
    yuvTypeBuilder.setX(mImageSize.width)
    yuvTypeBuilder.setY(mImageSize.height)
    yuvTypeBuilder.setYuvFormat(ImageFormat.YUV_420_888)
    mInputAllocation = Allocation.createTyped(
        rs, yuvTypeBuilder.create(),
        Allocation.USAGE_IO_INPUT or Allocation.USAGE_SCRIPT)

    /*
     * The RGB type is also the same size as the input image.  Other examples
     * declare this element as a plain int type, but being explicit here makes
     * the code more readable.
     */
    val rgbType = Type.createXY(rs, Element.RGBA_8888(rs), mImageSize.width, mImageSize.height)

    mScriptAllocation = Allocation.createTyped(
        rs, rgbType,
        Allocation.USAGE_SCRIPT)

    mOutputAllocation = Allocation.createTyped(
        rs, rgbType,
        Allocation.USAGE_IO_OUTPUT or Allocation.USAGE_SCRIPT)

    /*
     * Finally, set up an allocation to which we will write our cropped image.  The
     * dimensions of this one are (wantx,wanty)
     */
    val rgbCroppedType = Type.createXY(rs, Element.RGBA_8888(rs), wantx, wanty)
    mOutputAllocationRGB = Allocation.createTyped(
        rs, rgbCroppedType,
        Allocation.USAGE_SCRIPT)
}
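As a rough sanity check on why the allocations are typed the way they are: a YUV_420_888 frame stores 1.5 bytes per pixel (full-resolution Y plus 2x2-subsampled U and V), while RGBA_8888 stores 4. A small sketch of the arithmetic, using a hypothetical 1920x1080 preview size:

```java
public class BufferSizes {
    /** Bytes in one YUV_420_888 frame: full-res Y plus 2x2-subsampled U and V. */
    static int yuvBytes(int w, int h) {
        return w * h + 2 * (w / 2) * (h / 2);
    }

    /** Bytes in one RGBA_8888 frame: four bytes per pixel. */
    static int rgbaBytes(int w, int h) {
        return w * h * 4;
    }

    public static void main(String[] args) {
        int w = 1920, h = 1080;               // hypothetical preview size
        System.out.println(yuvBytes(w, h));   // prints 3110400 (1.5 bytes/pixel)
        System.out.println(rgbaBytes(w, h));  // prints 8294400 (4 bytes/pixel)
    }
}
```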

Finally, since you're cropping, you need to tell the script what to do before invocation. If the image sizes don't change, you can probably optimize this by moving the LaunchOptions and variable settings so they happen just once (rather than every frame), but I'm leaving them here to make the example clearer.

override fun onBufferAvailable(a: Allocation) {
    // Get the new frame into the input allocation
    mInputAllocation!!.ioReceive()

    // Run processing pass if we should send a frame
    val current = System.currentTimeMillis()
    if (current - mLastProcessed >= mFrameEveryMs) {
        val lo = Script.LaunchOptions()

        /*
         * These coordinates are the portion of the original image that we want to
         * include.  Because we're rotating (in this case) x and y are reversed
         * (but still offset from the actual center of each dimension)
         */

        lo.setX(starty, endy)
        lo.setY(startx, endx)

        mScriptHandle.set_xStart(lo.xStart.toLong())
        mScriptHandle.set_yStart(lo.yStart.toLong())

        mScriptHandle.set_outputWidth(wantx.toLong())
        mScriptHandle.set_outputHeight(wanty.toLong())

        /*
         * gInputFrame and gOutputFrame are assumed to have been bound elsewhere,
         * e.g. mScriptHandle.set_gInputFrame(mInputAllocation) and
         * mScriptHandle.set_gOutputFrame(mOutputAllocationRGB).  The allocations
         * passed to forEach only drive the kernel's iteration domain; the cropped
         * pixels are written into gOutputFrame.
         */
        mScriptHandle.forEach_yuv2rgbFrames(mScriptAllocation, mOutputAllocation, lo)

        val output = Bitmap.createBitmap(
            wantx, wanty,
            Bitmap.Config.ARGB_8888
        )

        mOutputAllocationRGB!!.copyTo(output)

        /* Do something with the resulting bitmap */
        listener?.invoke(output)

        mLastProcessed = current
    }
}
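The startx/endx and starty/endy values above are never shown being computed. Assuming a wantx x wanty crop window centered in the frame (the "offset from the actual center of each dimension" mentioned in the comment), the bounds would work out to something like this hypothetical helper, with end exclusive as LaunchOptions expects:

```java
public class CropBounds {
    /** Centered crop window: returns {start, end}, end exclusive. Hypothetical helper. */
    static int[] centered(int full, int want) {
        int start = (full - want) / 2;
        return new int[] { start, start + want };
    }

    public static void main(String[] args) {
        int[] xr = centered(1920, 640);  // hypothetical frame width and crop width
        int[] yr = centered(1080, 480);  // hypothetical frame height and crop height
        System.out.println(xr[0] + ".." + xr[1]);  // prints 640..1280
        System.out.println(yr[0] + ".." + yr[1]);  // prints 300..780
    }
}
```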

All this might seem like a lot, but it's very fast - way faster than doing the rotation on the Java/Kotlin side - and thanks to RenderScript's ability to run the kernel function over a subset of the image, it's less overhead than creating a bitmap and then creating a second, cropped one.

For me, all the rotation is necessary because the image seen by RenderScript was rotated 90 degrees relative to the camera. I am told this is some kind of peculiarity of Samsung phones.

RenderScript was intimidating at first, but once you get used to what it's doing it's not so bad. I hope this is helpful to someone.

Thanks for this post. I am new to RenderScript and have little to no understanding of how the YUV and RGB formats work. I am using your code for converting YUV to RGB and rotating, but not cropping (I am passing 0 as start_x and start_y, and the actual width and height, assuming that will give me back the whole image). However, my application crashes with no stack trace and this error:

A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0xb8 in tid 29575

I am wondering what I am missing. I am also a little confused about the code you posted, as I do not see anywhere that the received buffer in mInputAllocation is passed into the RenderScript. So how does the script get its input data, and how does it put the result into mOutputAllocation and mOutputAllocationRGB?

Could you please elaborate on that, or point me in the right direction?
