
Process every camera frame as Bitmap with OpenGL

I have an app where I want to process every frame coming from the camera to do some ARCore work. I have a class implementing GLSurfaceView.Renderer, and in its onDrawFrame(GL10 gl) method I want to work with an Android Bitmap, so I call this code to get a bitmap of the current frame:

private Bitmap getTargetImageBitmapOpenGL(int cx, int cy, int w, int h) {
    try {
      // Lazily allocate the reusable bitmap and a direct buffer for glReadPixels.
      if (currentTargetImageBitmap == null) {
        currentTargetImageBitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
        byteBuffer = ByteBuffer.allocateDirect(w * h * 4);
        byteBuffer.order(ByteOrder.nativeOrder());
      }

      // Clamp the center point so the w x h window stays inside the surface
      // (width and height are the surface dimensions, stored as fields).
      if ((cx + w / 2) > width) {
        Log.e(TAG, "TargetImage CenterPoint invalid A: " + cx + " " + cy);
        cx = width - w / 2;
      }
      if ((cx - w / 2) < 0) {
        Log.e(TAG, "TargetImage CenterPoint invalid B: " + cx + " " + cy);
        cx = w / 2;
      }
      if ((cy + h / 2) > height) {
        Log.e(TAG, "TargetImage CenterPoint invalid C: " + cx + " " + cy);
        cy = height - h / 2;
      }
      if ((cy - h / 2) < 0) {
        Log.e(TAG, "TargetImage CenterPoint invalid D: " + cx + " " + cy);
        cy = h / 2;
      }

      // Lower-left corner of the read window. Note that glReadPixels uses a
      // bottom-left origin, so the returned rows are vertically flipped
      // relative to Android's top-left convention (hence the sometimes-needed
      // cy = height - cy adjustment).
      int x = cx - w / 2;
      int y = cy - h / 2;

      // This call blocks until the GPU has finished rendering the frame,
      // which is where most of the time goes.
      byteBuffer.rewind();
      GLES20.glReadPixels(x, y, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE,
          byteBuffer);

      // ARGB_8888 stores pixels as R,G,B,A bytes in memory, matching GL_RGBA,
      // so the raw bytes can be copied straight into the bitmap.
      byteBuffer.rewind();
      currentTargetImageBitmap.copyPixelsFromBuffer(byteBuffer);

      return currentTargetImageBitmap;

    } catch (Exception e) {
      e.printStackTrace();
    }

    return null;
  }

This method takes around 90 ms, which is definitely too slow to process every incoming frame in real time; onDrawFrame(GL10 gl) also draws the frame to a surface view, so it needs to stay fast. Any idea why this is so slow? It would even suffice to read the pixels of every other frame while still drawing every frame to my SurfaceView. I tried calling the shown method in AsyncTask.execute(), but another thread cannot read via GLES20.glReadPixels(), since it is not the GL thread.
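One common way to reduce the stall without leaving the GL thread is to stage the readback through a pixel buffer object, which requires an OpenGL ES 3.0 context: with a GL_PIXEL_PACK_BUFFER bound, glReadPixels returns without waiting for the GPU, and the data is mapped a frame or two later. This is a minimal sketch under that assumption; all method and field names here are illustrative and error checking is omitted:

```java
private int pboId;

private void initPbo(int w, int h) {
  int[] ids = new int[1];
  GLES30.glGenBuffers(1, ids, 0);
  pboId = ids[0];
  GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pboId);
  // Allocate GPU-side storage; STREAM_READ hints GPU-writes / CPU-reads.
  GLES30.glBufferData(GLES30.GL_PIXEL_PACK_BUFFER, w * h * 4, null,
      GLES30.GL_STREAM_READ);
  GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);
}

// Frame N: start the copy into the PBO. With a PACK buffer bound,
// glReadPixels takes a byte offset instead of a client buffer and
// returns without blocking on the GPU.
private void startRead(int x, int y, int w, int h) {
  GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pboId);
  GLES30.glReadPixels(x, y, w, h, GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, 0);
  GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);
}

// Frame N+1 (or later): map the PBO and copy the pixels into the bitmap.
private void finishRead(Bitmap target, int w, int h) {
  GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pboId);
  ByteBuffer mapped = (ByteBuffer) GLES30.glMapBufferRange(
      GLES30.GL_PIXEL_PACK_BUFFER, 0, w * h * 4, GLES30.GL_MAP_READ_BIT);
  if (mapped != null) {
    target.copyPixelsFromBuffer(mapped);
    GLES30.glUnmapBuffer(GLES30.GL_PIXEL_PACK_BUFFER);
  }
  GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);
}
```

Alternating between two PBOs (read into one while mapping the other) hides the latency almost completely, at the cost of the result being one frame old, which also matches the stated goal of reading only every other frame.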

Many modern GPUs can decode YUV natively; the issue is how to get the YUV surface into OpenGL ES, as this is not normally something OpenGL ES does. Most operating systems (Android included) let you import external surfaces directly into OpenGL ES via the EGL_image_external extension, and these external surfaces can be marked up as being YUV, with automatic color conversion.

Even better, this is all handled zero-copy: the camera buffer can be imported and accessed directly by the GPU.

The Android mechanism for this import is the SurfaceTexture class, and the necessary usage is described here: https://source.android.com/devices/graphics/arch-st
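As a rough sketch of that usage (identifiers are illustrative, and the exact wiring depends on which camera API is in play; the texture calls must run on the GL thread):

```java
// Create an external OES texture that the camera can render into.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
    GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
    GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

// The camera writes frames into this SurfaceTexture with no CPU copy.
SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
surfaceTexture.setOnFrameAvailableListener(st -> requestRender());

// Per frame, on the GL thread: latch the newest camera buffer.
surfaceTexture.updateTexImage();

// The fragment shader then samples it as an external texture:
//   #extension GL_OES_EGL_image_external : require
//   uniform samplerExternalOES sCameraTexture;
```

Since the question mentions ARCore: ARCore follows the same pattern, in that you hand it an external OES texture name via Session.setCameraTextureName() and it fills that texture with the camera image each frame, so any per-frame processing you can express as a shader reading that texture avoids the glReadPixels round trip entirely.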
