
Android camera frame processing using OpenGL

I am trying to apply face detection to camera preview frames. I am using OpenGL and OpenCV to process these camera frames at runtime.

@Override
public void onDrawFrame(GL10 unused) {
    if (VERBOSE) {
        Log.d(TAG, "onDrawFrame tex=" + mTextureId);
    }

    mSurfaceTexture.updateTexImage();
    mSurfaceTexture.getTransformMatrix(mSTMatrix);

    // TODO: need to implement
    //JniCppManager.processFrame();

    drawFrame(mTextureId, mSTMatrix);
}

I am trying to write a C++ implementation of processFrame(). How can I get an OpenCV Mat object in C++ from the transformation matrix? Could anyone give me some pointers toward a solution?

Your pipeline is currently:

  • Camera (produces frame)
  • SurfaceTexture (receives frame, converts to GLES "external" texture)
  • [missing stuff]
  • Array of RGB bytes passed to C++

What you need to do for [missing stuff] is render the pixels to an off-screen pbuffer and read them back with glReadPixels(). You can do this from Java or native code; for the former, you'd want to read them into a "direct" ByteBuffer so you can easily access the pixels from native code. The EGL context used by GLES is held in thread-local storage, so native code running on the GLSurfaceView render thread will be able to access it.
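For example, a minimal sketch of that readback step (assuming drawFrame() has just rendered the external texture into an off-screen pbuffer of the given size, and that JniCppManager.processFrame() is extended to take the buffer and dimensions — that signature is an assumption, not the asker's actual code):

// Field in the renderer; the direct buffer is allocated once and reused.
// Needs java.nio.ByteBuffer, java.nio.ByteOrder and android.opengl.GLES20.
private ByteBuffer mPixelBuf;

private void readFrameAndProcess(int width, int height) {
    if (mPixelBuf == null) {
        // Direct buffer: native code can reach the pixels without an extra copy.
        mPixelBuf = ByteBuffer.allocateDirect(width * height * 4);
        mPixelBuf.order(ByteOrder.nativeOrder());
    }
    mPixelBuf.rewind();

    // Read the pbuffer contents back as RGBA bytes.
    GLES20.glReadPixels(0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, mPixelBuf);

    // Assumed native signature; on the C++ side the data can be wrapped as
    // cv::Mat(height, width, CV_8UC4, env->GetDirectBufferAddress(buf))
    // without copying.
    JniCppManager.processFrame(mPixelBuf, width, height);
}

You would call readFrameAndProcess() from onDrawFrame() right after drawFrame() has rendered into the pbuffer surface.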

An example of this can be found in the bigflake ExtractMpegFramesTest, which differs primarily in that it's grabbing frames from a video rather than a Camera.

For API 19+, if you can process frames in YV12 or NV21 rather than RGB, you can feed the Camera to an ImageReader and get access to the data without having to copy/convert it.
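A rough sketch of the ImageReader route (this sketch uses ImageFormat.YUV_420_888 with a camera2 capture session rather than YV12/NV21 with the old Camera API, and the previewWidth/previewHeight/backgroundHandler names are illustrative):

// Create a reader for the preview size; maxImages > 1 avoids stalling the camera.
ImageReader reader = ImageReader.newInstance(
        previewWidth, previewHeight, ImageFormat.YUV_420_888, 2);

reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader r) {
        Image image = r.acquireLatestImage();
        if (image == null) {
            return;
        }
        // The planes expose direct ByteBuffers, so no copy is needed to hand
        // them to native code. Plane 0 is Y; planes 1 and 2 are U and V.
        Image.Plane yPlane = image.getPlanes()[0];
        ByteBuffer yBuf = yPlane.getBuffer();
        int rowStride = yPlane.getRowStride();
        // ... pass yBuf + rowStride (and the chroma planes) to the detector ...
        image.close();  // release the buffer back to the reader
    }
}, backgroundHandler);

// Then add reader.getSurface() as an output target of the camera capture
// session so preview frames are delivered to the listener above.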
