
Rotating YUV image data for Portrait Mode Using RenderScript

For a video image processing project, I have to rotate the incoming YUV image data so that it is displayed in portrait rather than landscape. I used this project, which gave me a deep insight into how to convert YUV image data to ARGB in order to process it in real time. The only drawback of that project is that it is landscape only; there is no portrait mode option (I don't know why the people at Google provide an example that only handles landscape). I want to change that.

So I decided to rotate the data within a custom YUV-to-RGB conversion script so that it is displayed in portrait mode. The following GIF demonstrates how the app displayed the data before I applied any rotation.

[GIF: camera preview before any rotation is applied]

You have to know that in Android, YUV image data is delivered in landscape orientation even when the device is in portrait mode (I did not know that before starting this project; likewise, I don't understand why there is no method available to rotate the frames with a single call). This means that even when the device is in portrait mode, the starting point of a frame is at the bottom-left corner, whereas in portrait mode the starting point of each frame should be at the top-left corner. I use matrix notation for the fields (e.g. (0,0), (0,1), and so on). Note: I took the sketch from here:
[sketch: matrix-notation layout of a landscape YUV frame]

To rotate a landscape frame, we have to reorganize the fields. Here is the mapping I derived from the sketch (see above); it shows, for a single yuv_420 frame in landscape mode, where each field has to go in order to rotate the frame by 90 degrees:

first column starting from the bottom-left corner and going upwards:
(0,0) -> (0,5)       // (0,0) should be at (0,5)
(0,1) -> (1,5)       // (0,1) should be at (1,5)
(0,2) -> (2,5)       // and so on ..
(0,3) -> (3,5)
(0,4) -> (4,5)
(0,5) -> (5,5)

2nd column starting at (1,0) and going upwards:
(1,0) -> (0,4)
(1,1) -> (1,4)
(1,2) -> (2,4)
(1,3) -> (3,4)
(1,4) -> (4,4)
(1,5) -> (5,4)

and so on...

In effect, what happens is that the first column becomes the new first row, the second column becomes the new second row, and so on. From the mapping we can make the following observations:

  • The x coordinate of the result is always equal to the y coordinate on the left-hand side of the mapping. So we can say x = y.
  • For the y coordinate of the result, the following equation must hold: y = width - 1 - x (I tested this against all coordinates in the sketch, and it always held true); a small sanity check of both observations follows right after this list.
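
Just to double-check these two observations, here is a tiny CPU-side snippet of my own (it is not part of the project; the class name is made up) that applies both formulae to a few of the fields listed above, with width = 6, which is what the mapping above implies:

// Tiny sanity check (my own sketch, not from the project): apply the two
// observations to a few of the source fields from the mapping above.
public class RotationMappingCheck {
    public static void main(String[] args) {
        int width = 6;                                            // as implied by the mapping above
        int[][] sourceFields = { {0, 0}, {0, 1}, {0, 5}, {1, 0}, {1, 5} };
        for (int[] field : sourceFields) {
            int x = field[0], y = field[1];
            int resultX = y;                                      // 1st observation: x = y
            int resultY = width - 1 - x;                          // 2nd observation: y = width - 1 - x
            // prints e.g. "(0,0) -> (0,5)", matching the mapping above
            System.out.printf("(%d,%d) -> (%d,%d)%n", x, y, resultX, resultY);
        }
    }
}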

So I wrote the following RenderScript kernel function:

#pragma version(1)
#pragma rs java_package_name(com.jon.condino.testing.renderscript)
#pragma rs_fp_relaxed

rs_allocation gCurrentFrame;
int width;

uchar4 __attribute__((kernel)) yuv2rgbFrames(uint32_t x,uint32_t y)
{

    uint32_t inX = y;             // 1st observation: set x=y
    uint32_t inY = width - 1 - x; // 2nd observation: the equation mentioned above

    // the remaining lines just retrieve the YUV pixel elements, convert them to RGB and output the result

    // Read in pixel values from latest frame - YUV color space
    // The functions rsGetElementAtYuv_uchar_? require API 18
    uchar4 curPixel;
    curPixel.r = rsGetElementAtYuv_uchar_Y(gCurrentFrame, inX, inY);
    curPixel.g = rsGetElementAtYuv_uchar_U(gCurrentFrame, inX, inY);
    curPixel.b = rsGetElementAtYuv_uchar_V(gCurrentFrame, inX, inY);

    // uchar4 rsYuvToRGBA_uchar4(uchar y, uchar u, uchar v);
    // This function uses the NTSC formulae to convert YUV to RGB
    uchar4 out = rsYuvToRGBA_uchar4(curPixel.r, curPixel.g, curPixel.b);

    return out;
}

The approach seems to work, but it has a small bug, as the image below shows. As we can see, the camera preview is in portrait mode, but there is this very strange line of colors on the left side of my camera preview. Why is this happening? (Note that I am using the back-facing camera.)
[screenshot: portrait camera preview with a strange line of colors along the left edge]

Any suggestion for solving the problem would be great. I have been working on this problem (rotating YUV from landscape to portrait) for two weeks now, and this is by far the best solution I could come up with on my own. I hope someone can help improve the code so that the strange line of colors on the left side disappears as well.

UPDATE:

The Allocations I create in the code are the following:

// yuvInAlloc will be the Allocation that will get the YUV image data
// from the camera
yuvInAlloc = createYuvIoInputAlloc(rs, x, y, ImageFormat.YUV_420_888);
yuvInAlloc.setOnBufferAvailableListener(this);

// here the createYuvIoInputAlloc() method
public Allocation createYuvIoInputAlloc(RenderScript rs, int x, int y, int yuvFormat) {
    return Allocation.createTyped(rs, createYuvType(rs, x, y, yuvFormat),
            Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);
}

// the custom script will convert the YUV to RGBA and put it to this Allocation
rgbInAlloc = RsUtil.createRgbAlloc(rs, x, y);

// here the createRgbAlloc() method
public Allocation createRgbAlloc(RenderScript rs, int x, int y) {
    return Allocation.createTyped(rs, createType(rs, Element.RGBA_8888(rs), x, y));
}



// the allocation to which we put all the processed image data
rgbOutAlloc = RsUtil.createRgbIoOutputAlloc(rs, x, y);

// here the createRgbIoOutputAlloc() method
public Allocation createRgbIoOutputAlloc(RenderScript rs, int x, int y) {
    return Allocation.createTyped(rs, createType(rs, Element.RGBA_8888(rs), x, y),
                Allocation.USAGE_IO_OUTPUT | Allocation.USAGE_SCRIPT);
}

Some other helper functions:

public Type createType(RenderScript rs, Element e, int x, int y) {
    if (Build.VERSION.SDK_INT >= 21) {
        return Type.createXY(rs, e, x, y);
    } else {
        return new Type.Builder(rs, e).setX(x).setY(y).create();
    }
}

@RequiresApi(18)
public Type createYuvType(RenderScript rs, int x, int y, int yuvFormat) {
    boolean supported = yuvFormat == ImageFormat.NV21 || yuvFormat == ImageFormat.YV12;
    if (Build.VERSION.SDK_INT >= 19) {
        supported |= yuvFormat == ImageFormat.YUV_420_888;
    }
    if (!supported) {
        throw new IllegalArgumentException("invalid yuv format: " + yuvFormat);
    }
    return new Type.Builder(rs, createYuvElement(rs)).setX(x).setY(y).setYuvFormat(yuvFormat)
            .create();
}

public Element createYuvElement(RenderScript rs) {
    if (Build.VERSION.SDK_INT >= 19) {
        return Element.YUV(rs);
    } else {
        return Element.createPixel(rs, Element.DataType.UNSIGNED_8, Element.DataKind.PIXEL_YUV);
    }
}

Invoking the custom RenderScript and using the Allocations:

// see below how the input size is determined
customYUVToRGBAConverter.invoke_setInputImageSize(x, y);
customYUVToRGBAConverter.set_inputAllocation(yuvInAlloc);

// receive some frames
yuvInAlloc.ioReceive();


// performs the conversion from YUV to RGB
customYUVToRGBAConverter.forEach_convert(rgbInAlloc);

// this just does the frame manipulation, e.g. applying a particular filter
renderer.renderFrame(mRs, rgbInAlloc, rgbOutAlloc);


// send manipulated data to output stream
rgbOutAlloc.ioSend();

Last but not least, the input image size. The x and y coordinates used by the methods you saw above are based on the preview size, denoted here as mPreviewSize:

int deviceOrientation = getWindowManager().getDefaultDisplay().getRotation();
int totalRotation = sensorToDeviceRotation(cameraCharacteristics, deviceOrientation);
// determine if we are in portrait mode
boolean swapRotation = totalRotation == 90 || totalRotation == 270;
int rotatedWidth = width;
int rotatedHeight = height;

// are we in portrait mode? If yes, then swap the values
if(swapRotation){
      rotatedWidth = height;
      rotatedHeight = width;
}

// determine the preview size
mPreviewSize = chooseOptimalSize(
                  map.getOutputSizes(SurfaceTexture.class),
                  rotatedWidth,
                  rotatedHeight);

So, in my case, x would be mPreviewSize.getWidth() and y would be mPreviewSize.getHeight().
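
Put together, the wiring is roughly the following (just a condensed sketch of the calls already shown above, assuming mPreviewSize was chosen via chooseOptimalSize() as described):

// condensed wiring sketch of the snippets above
int x = mPreviewSize.getWidth();
int y = mPreviewSize.getHeight();

// input allocation fed by the camera (see createYuvIoInputAlloc() above)
yuvInAlloc = createYuvIoInputAlloc(rs, x, y, ImageFormat.YUV_420_888);
yuvInAlloc.setOnBufferAvailableListener(this);

// intermediate and output allocations (see the RsUtil helpers above)
rgbInAlloc  = RsUtil.createRgbAlloc(rs, x, y);
rgbOutAlloc = RsUtil.createRgbIoOutputAlloc(rs, x, y);

// hand the size and the input allocation to the custom script
customYUVToRGBAConverter.invoke_setInputImageSize(x, y);
customYUVToRGBAConverter.set_inputAllocation(yuvInAlloc);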

Please take a look at my YuvConverter. It was inspired by android - Renderscript to convert NV12 yuv to RGB.

Its .rs part is very simple:

#pragma version(1)
#pragma rs java_package_name(whatever)
#pragma rs_fp_relaxed

rs_allocation Yplane;
uint32_t Yline;
uint32_t UVline;
rs_allocation Uplane;
rs_allocation Vplane;
rs_allocation NV21;
uint32_t Width;
uint32_t Height;

uchar4 __attribute__((kernel)) YUV420toRGB(uint32_t x, uint32_t y)
{
    uchar Y = rsGetElementAt_uchar(Yplane, x + y * Yline);
    // the U/V planes have a pixel stride of 2, hence the (x & ~1) byte index;
    // subtract 128 so the chroma values are centered, as the JPEG formulae below expect
    short V = rsGetElementAt_uchar(Vplane, (x & ~1) + y/2 * UVline) - 128;
    short U = rsGetElementAt_uchar(Uplane, (x & ~1) + y/2 * UVline) - 128;
    // https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion
    short R = Y + (512           + 1436 * V) / 1024; //             1.402
    short G = Y + (512 -  352 * U - 731 * V) / 1024; // -0.344136  -0.714136
    short B = Y + (512 + 1815 * U          ) / 1024; //  1.772
    if (R < 0) R = 0; else if (R > 255) R = 255;
    if (G < 0) G = 0; else if (G > 255) G = 255;
    if (B < 0) B = 0; else if (B > 255) B = 255;
    return (uchar4){R, G, B, 255};
}

uchar4 __attribute__((kernel)) YUV420toRGB_180(uint32_t x, uint32_t y)
{
    return YUV420toRGB(Width - 1 - x, Height - 1 - y);
}

uchar4 __attribute__((kernel)) YUV420toRGB_90(uint32_t x, uint32_t y)
{
    return YUV420toRGB(y, Width - x - 1);
}

uchar4 __attribute__((kernel)) YUV420toRGB_270(uint32_t x, uint32_t y)
{
    return YUV420toRGB(Height - 1 - y, x);
}
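
A worked example of my own (with made-up numbers): assume a 1280x720 sensor frame rotated with ROTATION_90, so that Width = 720 and Height = 1280 for the output. The output pixel (0, 0) then evaluates YUV420toRGB(0, 720 - 0 - 1), i.e. it reads the source pixel (0, 719) in the bottom-left corner of the landscape frame, which is exactly the field that, according to the mapping in the question, should end up at the top-left of the portrait frame.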

I used my Java wrapper from Flutter, but that does not really matter:

public class YuvConverter implements AutoCloseable {

    private RenderScript rs;
    private ScriptC_yuv2rgb scriptC_yuv2rgb;
    private Bitmap bmp;

    YuvConverter(Context ctx, int ySize, int uvSize, int width, int height) {
        rs = RenderScript.create(ctx);
        scriptC_yuv2rgb = new ScriptC_yuv2rgb(rs);
        init(ySize, uvSize, width, height);
    }

    private Allocation allocY, allocU, allocV, allocOut;

    @Override
    public void close() {
        if (allocY != null) allocY.destroy();
        if (allocU != null) allocU.destroy();
        if (allocV != null) allocV.destroy();
        if (allocOut != null) allocOut.destroy();
        bmp = null;
        allocY = null;
        allocU = null;
        allocV = null;
        allocOut = null;
        scriptC_yuv2rgb.destroy();
        scriptC_yuv2rgb = null;
        rs = null;
    }

    private void init(int ySize, int uvSize, int width, int height) {
        if (bmp == null || bmp.getWidth() != width || bmp.getHeight() != height) {
            bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
            if (allocOut != null) allocOut.destroy();
            allocOut = null;
        }
        if (allocY == null || allocY.getBytesSize() != ySize) {
            if (allocY != null) allocY.destroy();
            Type.Builder yBuilder = new Type.Builder(rs, Element.U8(rs)).setX(ySize);
            allocY = Allocation.createTyped(rs, yBuilder.create(), Allocation.USAGE_SCRIPT);
        }
        if (allocU == null || allocU.getBytesSize() != uvSize || allocV == null || allocV.getBytesSize() != uvSize ) {
            if (allocU != null) allocU.destroy();
            if (allocV != null) allocV.destroy();
            Type.Builder uvBuilder = new Type.Builder(rs, Element.U8(rs)).setX(uvSize);
            allocU = Allocation.createTyped(rs, uvBuilder.create(), Allocation.USAGE_SCRIPT);
            allocV = Allocation.createTyped(rs, uvBuilder.create(), Allocation.USAGE_SCRIPT);
        }
        if (allocOut == null || allocOut.getBytesSize() != width*height*4) {
            Type rgbType = Type.createXY(rs, Element.RGBA_8888(rs), width, height);
            if (allocOut != null) allocOut.destroy();
            allocOut = Allocation.createTyped(rs, rgbType, Allocation.USAGE_SCRIPT);
        }
    }

    @Retention(RetentionPolicy.SOURCE)
    // Enumerate valid values for this interface
    @IntDef({Surface.ROTATION_0, Surface.ROTATION_90, Surface.ROTATION_180, Surface.ROTATION_270})
    // Create an interface for validating int types
    public @interface Rotation {}

    /**
     * Converts an YUV_420 image into Bitmap.
     * @param yPlane  byte[] of Y, with pixel stride 1
     * @param uPlane  byte[] of U, with pixel stride 2
     * @param vPlane  byte[] of V, with pixel stride 2
     * @param yLine   line stride of Y
     * @param uvLine  line stride of U and V
     * @param width   width of the output image (note that it is swapped with height for portrait rotation)
     * @param height  height of the output image
     * @param rotation  rotation to apply. ROTATION_90 is for portrait back-facing camera.
     * @return RGBA_8888 Bitmap image.
     */

    public Bitmap YUV420toRGB(byte[] yPlane, byte[] uPlane, byte[] vPlane,
                              int yLine, int uvLine, int width, int height,
                              @Rotation int rotation) {
        init(yPlane.length, uPlane.length, width, height);

        allocY.copyFrom(yPlane);
        allocU.copyFrom(uPlane);
        allocV.copyFrom(vPlane);
        scriptC_yuv2rgb.set_Width(width);
        scriptC_yuv2rgb.set_Height(height);
        scriptC_yuv2rgb.set_Yline(yLine);
        scriptC_yuv2rgb.set_UVline(uvLine);
        scriptC_yuv2rgb.set_Yplane(allocY);
        scriptC_yuv2rgb.set_Uplane(allocU);
        scriptC_yuv2rgb.set_Vplane(allocV);

        switch (rotation) {
            case Surface.ROTATION_0:
                scriptC_yuv2rgb.forEach_YUV420toRGB(allocOut);
                break;
            case Surface.ROTATION_90:
                scriptC_yuv2rgb.forEach_YUV420toRGB_90(allocOut);
                break;
            case Surface.ROTATION_180:
                scriptC_yuv2rgb.forEach_YUV420toRGB_180(allocOut);
                break;
            case Surface.ROTATION_270:
                scriptC_yuv2rgb.forEach_YUV420toRGB_270(allocOut);
                break;
        }

        allocOut.copyTo(bmp);
        return bmp;
    }
}

The key to performance is that the RenderScript can be initialized once (that's why YuvConverter.init() is public) and the subsequent calls are very fast.
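
For completeness, here is a minimal usage sketch of my own (it is not part of the answer above; yuvConverter and imageReader are assumed to already exist) that feeds an android.media.Image in YUV_420_888 format, whose U and V planes have a pixel stride of 2, through the wrapper with a 90-degree rotation:

// hypothetical usage sketch, not from the answer above
Image image = imageReader.acquireLatestImage();
Image.Plane[] planes = image.getPlanes();

byte[] yBytes = new byte[planes[0].getBuffer().remaining()];
byte[] uBytes = new byte[planes[1].getBuffer().remaining()];
byte[] vBytes = new byte[planes[2].getBuffer().remaining()];
planes[0].getBuffer().get(yBytes);
planes[1].getBuffer().get(uBytes);
planes[2].getBuffer().get(vBytes);

// For ROTATION_90 the output Bitmap is portrait, so width and height are the
// sensor frame's height and width, respectively (see the javadoc above).
Bitmap bitmap = yuvConverter.YUV420toRGB(
        yBytes, uBytes, vBytes,
        planes[0].getRowStride(),   // yLine
        planes[1].getRowStride(),   // uvLine
        image.getHeight(),          // output width
        image.getWidth(),           // output height
        Surface.ROTATION_90);

image.close();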
