Android Camera2 API Showing Processed Preview Image

The new Camera2 API is very different from the old one. The part of the pipeline that shows the manipulated camera frames to the user confuses me. I know there is a very good explanation in Camera preview image data processing with Android L and Camera2 API, but showing the frames is still not clear. My question is: what is the way to show frames that come from the ImageReader's callback, after some processing, on screen while preserving the efficiency and speed of the Camera2 API pipeline?

Example flow:

camera.add_target(imagereader.getsurface) -> do some processing in the ImageReader's callback -> (show that processed image on screen?)

Workaround idea: send a bitmap to an ImageView every time a new frame is processed.

Edit after clarification of the question; original answer at the bottom.

It depends on where you're doing your processing.

If you're using RenderScript, you can connect a Surface from a SurfaceView or a TextureView to an Allocation (with setSurface), and then write your processed output to that Allocation and send it out with Allocation.ioSend(). The HDR Viewfinder demo uses this approach.
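
A minimal sketch of that hookup, assuming an existing RenderScript context and a SurfaceView; the mRS, mSurfaceView, mScript, inputAlloc and forEach_process names are placeholders for your own fields and kernel:

    Type.Builder outType = new Type.Builder(mRS, Element.RGBA_8888(mRS))
            .setX(mPreviewWidth).setY(mPreviewHeight);
    Allocation outputAlloc = Allocation.createTyped(mRS, outType.create(),
            Allocation.USAGE_SCRIPT | Allocation.USAGE_IO_OUTPUT);

    // Bind the view's Surface to the Allocation; anything written to the
    // Allocation can now be pushed to the screen.
    outputAlloc.setSurface(mSurfaceView.getHolder().getSurface());

    // Per frame: run the processing kernel, then send the result out.
    mScript.forEach_process(inputAlloc, outputAlloc); // hypothetical kernel
    outputAlloc.ioSend();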

If you're doing EGL shader-based processing, you can connect a Surface to an EGLSurface with eglCreateWindowSurface, passing the Surface as the native_window argument. Then you can render your final output to that EGLSurface, and when you call eglSwapBuffers, the buffer will be sent to the screen.
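
A minimal sketch of that path, assuming an already-initialized EGL display, config and context (the mEgl* fields and drawFrameWithShaders() are placeholders):

    int[] surfaceAttribs = { EGL14.EGL_NONE };
    EGLSurface eglSurface = EGL14.eglCreateWindowSurface(
            mEglDisplay, mEglConfig, surface /* the native_window */,
            surfaceAttribs, 0);
    EGL14.eglMakeCurrent(mEglDisplay, eglSurface, eglSurface, mEglContext);

    // Per frame: run the shader passes, then present the buffer on screen.
    drawFrameWithShaders();
    EGL14.eglSwapBuffers(mEglDisplay, eglSurface);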

If you're doing native processing, you can use the NDK ANativeWindow methods to write to a Surface that you pass from Java and convert to an ANativeWindow.
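
On the Java side that boundary is just a native method taking the Surface; a minimal sketch, where the library and method names are hypothetical and the native side would use ANativeWindow_fromSurface(), ANativeWindow_lock() and ANativeWindow_unlockAndPost():

    static {
        System.loadLibrary("nativeprocessing"); // hypothetical .so
    }

    // Implemented in C/C++: converts the Surface to an ANativeWindow
    // and writes the processed pixels into its buffer.
    private native void drawProcessedFrame(Surface surface, byte[] frameData);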

If you're doing Java-level processing, that's really slow and you probably don't want to. But you can use the new Android M ImageWriter class, or upload a texture to EGL every frame.
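
A minimal ImageWriter sketch (API 23+), assuming surface is the output Surface you want frames to land on and fillWithProcessedPixels() stands in for your own processing:

    ImageWriter writer = ImageWriter.newInstance(surface, 2 /* maxImages */);

    // Per frame: dequeue an Image, fill its planes, then queue it back
    // to hand the buffer to the Surface.
    Image out = writer.dequeueInputImage();
    fillWithProcessedPixels(out); // hypothetical processing step
    writer.queueInputImage(out);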

Or, as you say, draw to an ImageView every frame, but that'll be slow.


Original answer:

If you are capturing JPEG images, you can simply copy the contents of the ByteBuffer from Image.getPlanes()[0].getBuffer() into a byte[], and then use BitmapFactory.decodeByteArray to convert it to a Bitmap.

If you are capturing YUV_420_888 images, then you need to write your own conversion code from the 3-plane YCbCr 4:2:0 format to something you can display, such as an int[] of RGB values to create a Bitmap from; unfortunately there's not yet a convenient API for this.
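
A minimal, unoptimized sketch of such a conversion, assuming BT.601-style coefficients; the important part is honoring each plane's row and pixel strides, since the exact YUV_420_888 layout varies by device:

    static Bitmap yuv420ToBitmap(Image image) {
        int width = image.getWidth();
        int height = image.getHeight();
        Image.Plane yPlane = image.getPlanes()[0];
        Image.Plane uPlane = image.getPlanes()[1];
        Image.Plane vPlane = image.getPlanes()[2];
        ByteBuffer yBuf = yPlane.getBuffer();
        ByteBuffer uBuf = uPlane.getBuffer();
        ByteBuffer vBuf = vPlane.getBuffer();

        int[] argb = new int[width * height];
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                int y = yBuf.get(row * yPlane.getRowStride()
                        + col * yPlane.getPixelStride()) & 0xFF;
                // Chroma planes are subsampled 2x2 relative to luma.
                int u = (uBuf.get((row / 2) * uPlane.getRowStride()
                        + (col / 2) * uPlane.getPixelStride()) & 0xFF) - 128;
                int v = (vBuf.get((row / 2) * vPlane.getRowStride()
                        + (col / 2) * vPlane.getPixelStride()) & 0xFF) - 128;
                int r = clamp((int) (y + 1.402f * v));
                int g = clamp((int) (y - 0.344f * u - 0.714f * v));
                int b = clamp((int) (y + 1.772f * u));
                argb[row * width + col] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
        return Bitmap.createBitmap(argb, width, height, Bitmap.Config.ARGB_8888);
    }

    static int clamp(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }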

If you are capturing RAW_SENSOR images (Bayer-pattern unprocessed sensor data), then you need to do a whole lot of image processing, or just save a DNG.
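
For the save-a-DNG route, the framework's DngCreator (API 21+) does the heavy lifting; a minimal sketch, where characteristics and captureResult come from your capture session and dngFile is a hypothetical output path:

    try (FileOutputStream out = new FileOutputStream(dngFile)) {
        DngCreator dng = new DngCreator(characteristics, captureResult);
        dng.writeImage(out, rawImage); // rawImage is the RAW_SENSOR Image
        dng.close();
    } catch (IOException e) {
        Log.e(TAG, "Could not write DNG", e);
    }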

I had the same need and wanted a quick-and-dirty manipulation for a demo. I was not worried about efficient processing for a final product. This was easily achieved using the following Java solution.

My original code connecting the camera2 preview to a TextureView was commented out and replaced with a surface to an ImageReader:

    // Get the surface of the TextureView on the layout
    //SurfaceTexture texture = mTextureView.getSurfaceTexture();
    //if (null == texture) {
    //    return;
    //}
    //texture.setDefaultBufferSize(mPreviewWidth, mPreviewHeight);
    //Surface surface = new Surface(texture);

    // Capture the preview to the memory reader instead of a UI element
    mPreviewReader = ImageReader.newInstance(mPreviewWidth, mPreviewHeight, ImageFormat.JPEG, 1);
    Surface surface = mPreviewReader.getSurface();

    // This part stays the same regardless of where we render
    mCaptureRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    mCaptureRequestBuilder.addTarget(surface);
    mCameraDevice.createCaptureSession(...

Then I registered a listener for the image:

mPreviewReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireLatestImage();
        if (image != null) {
            Image.Plane plane = image.getPlanes()[0];
            ByteBuffer buffer = plane.getBuffer();
            // Use remaining() rather than capacity() so we only copy the
            // bytes the codec actually wrote into the buffer.
            byte[] bytes = new byte[buffer.remaining()];
            buffer.get(bytes);
            Bitmap preview = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
            image.close();
            if (preview != null) {
                // This gets the canvas for the same mTextureView we would have connected to the
                // Camera2 preview directly above.
                Canvas canvas = mTextureView.lockCanvas();
                if (canvas != null) {
                    float[] colorTransform = {
                            0, 0, 0, 0, 0,
                            .35f, .45f, .25f, 0, 0,
                            0, 0, 0, 0, 0,
                            0, 0, 0, 1, 0};
                    ColorMatrix colorMatrix = new ColorMatrix();
                    colorMatrix.set(colorTransform); // Apply the monochrome green transform
                    ColorMatrixColorFilter colorFilter = new ColorMatrixColorFilter(colorMatrix);
                    Paint paint = new Paint();
                    paint.setColorFilter(colorFilter);
                    canvas.drawBitmap(preview, 0, 0, paint);
                    mTextureView.unlockCanvasAndPost(canvas);
                }
            }
        }
    }
}, mBackgroundPreviewHandler);
