
Accessing Image Data Bytes in ARCore

I've created an ARCore Session and attached an OpenGL texture id through the Session#setCameraTextureName method to display my camera data. I'd like to have access to the camera image data bytes displayed on the texture.

ARKit and Tango provide access to the image bytes for each frame, but there doesn't seem to be anything that easily provides that in the ARCore API.

Is there any other way I can access the image bytes when using ARCore?

Maybe this could help you: I wanted to obtain the camera view as a bitmap. I have tested it on a Samsung S8.

    // Surface dimensions (hard-coded for a Samsung S8 here;
    // use your actual GL surface size instead).
    int w = 1080;
    int h = 2220;
    int[] b = new int[w * h];
    int[] bt = new int[w * h];
    IntBuffer ib = IntBuffer.wrap(b);
    ib.position(0);
    GLES20.glReadPixels(0, 0, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, ib);

    // The OpenGL pixel layout is incompatible with Android's Bitmap:
    // glReadPixels returns ABGR ints with the origin at the bottom-left,
    // so swap the red and blue channels and flip the rows vertically.
    for (int i = 0; i < h; i++)
    {
        for (int j = 0; j < w; j++)
        {
            int pix = b[i * w + j];
            int pb = (pix >> 16) & 0xff;        // blue channel
            int pr = (pix << 16) & 0x00ff0000;  // red channel
            int pix1 = (pix & 0xff00ff00) | pr | pb;
            bt[(h - i - 1) * w + j] = pix1;
        }
    }

    Bitmap sb = Bitmap.createBitmap(bt, w, h, Bitmap.Config.ARGB_8888);
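The channel swap in the inner loop above is just an ABGR-to-ARGB conversion combined with a vertical flip. Factored into a standalone helper (a sketch with a hypothetical class and method name, independent of any GL context), it looks like this:

```java
/** Helper mirroring the loop above: converts a bottom-up ABGR pixel array
 *  (what glReadPixels with GL_RGBA/GL_UNSIGNED_BYTE yields in a little-endian
 *  int[]) into a top-down ARGB array suitable for Bitmap.createBitmap. */
class PixelConvert {
    static int[] abgrToArgbFlipped(int[] src, int w, int h) {
        int[] dst = new int[w * h];
        for (int i = 0; i < h; i++) {
            for (int j = 0; j < w; j++) {
                int pix = src[i * w + j];
                int b = (pix >> 16) & 0xff;        // blue moves down to bits 0-7
                int r = (pix << 16) & 0x00ff0000;  // red moves up to bits 16-23
                dst[(h - i - 1) * w + j] = (pix & 0xff00ff00) | r | b;
            }
        }
        return dst;
    }
}
```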

For the time being, your best bet for accessing image data is probably drawing the texture to a renderbuffer and using glReadPixels into a persistently mapped pixel pack buffer. Use a fence sync to detect when the glReadPixels is complete.
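A rough sketch of that readback path on GLES 3.0 follows. It assumes a current GLES 3.0 context, that the camera texture has already been drawn into the currently bound framebuffer, and that `width`/`height` hold your render-target size; all GL calls are standard android.opengl.GLES30 API, but this is an untested outline, not a drop-in implementation.

```java
import android.opengl.GLES30;
import java.nio.ByteBuffer;

// Create a pixel pack buffer to receive the readback asynchronously.
int[] pbo = new int[1];
GLES30.glGenBuffers(1, pbo, 0);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbo[0]);
GLES30.glBufferData(GLES30.GL_PIXEL_PACK_BUFFER, width * height * 4, null,
        GLES30.GL_STREAM_READ);

// With a pack buffer bound, glReadPixels returns without stalling; the last
// argument is a byte offset into the buffer, not a client-side pointer.
GLES30.glReadPixels(0, 0, width, height,
        GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, 0);
long fence = GLES30.glFenceSync(GLES30.GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

// Later (e.g. on a subsequent frame), check the fence before mapping.
int status = GLES30.glClientWaitSync(fence, 0, 0);
if (status == GLES30.GL_ALREADY_SIGNALED
        || status == GLES30.GL_CONDITION_SATISFIED) {
    ByteBuffer pixels = (ByteBuffer) GLES30.glMapBufferRange(
            GLES30.GL_PIXEL_PACK_BUFFER, 0, width * height * 4,
            GLES30.GL_MAP_READ_BIT);
    // ... copy the pixel bytes out of `pixels` here ...
    GLES30.glUnmapBuffer(GLES30.GL_PIXEL_PACK_BUFFER);
    GLES30.glDeleteSync(fence);
}
```

Polling the fence with a zero timeout keeps the render thread from blocking; a real implementation would typically keep two or three PBOs in rotation so one frame's readback can complete while the next is issued.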

Another option is to use a compute shader and write directly to a persistent-mapped SSBO. (Disregard the persistent-mapped suggestion; I thought EXT_buffer_storage had broader support.)

The latter possibly involves fewer copies (the renderbuffer pixels may still hit DRAM even if you invalidate it after the glReadPixels), but it's also a less-common code path and incurs render/compute changeovers, so I don't have intuition about which approach would be more efficient.

As of ARCore v1.1.0, there is an API to access the image bytes for the current frame:

https://developers.google.com/ar/reference/java/com/google/ar/core/Frame.html#acquireCameraImage()
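A minimal sketch of using it, assuming you already have the Frame returned by session.update(); acquireCameraImage() returns an android.media.Image in YUV_420_888 format, which must be closed, and throws NotYetAvailableException until the camera has produced its first image:

```java
import android.media.Image;
import java.nio.ByteBuffer;
import com.google.ar.core.Frame;
import com.google.ar.core.exceptions.NotYetAvailableException;

// `frame` is the com.google.ar.core.Frame from session.update().
// try-with-resources closes the Image, releasing it back to ARCore.
try (Image image = frame.acquireCameraImage()) {
    Image.Plane yPlane = image.getPlanes()[0];   // plane 0 = luminance (Y)
    ByteBuffer yBuffer = yPlane.getBuffer();
    byte[] yBytes = new byte[yBuffer.remaining()];
    yBuffer.get(yBytes);                          // planes 1 and 2 hold U and V
} catch (NotYetAvailableException e) {
    // No CPU image yet, e.g. during the first few frames after resume.
}
```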
