
Accessing the iPhone camera frame while using ARKit with Unity, converting camera.videoParams.cvPixelBuffer

I am currently trying to create a Unity project which uses both OpenCV and ARKit. I am using OpenCV to perform some lightweight feature recognition that I don't want to do through ARKit directly. I have both the ARKit app and the OpenCV app working separately; however, when they are used together, ARKit grabs the camera, and I haven't yet figured out how to get the ARKit frame data to OpenCV for the feature recognition I have planned.

My current goal is to pipe the ARKit frame data out through the ARFrameUpdated callback, with something like the code below:

public void ARFrameUpdated(UnityARCamera camera)
{
    // Get the frame pixel buffer
    var cvPixBuf = camera.videoParams.cvPixelBufferPtr;

    // Somehow convert to BGRA and insert into an OpenCV data structure

    // Perform OpenCV-related actions
}

However, I am unsure how to convert camera.videoParams.cvPixelBufferPtr into something I can use with OpenCV.

If anyone knows of another approach I could use to do this, that would also be appreciated.
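
One untested idea, sketched below for reference: since cvPixelBufferPtr appears to be a raw CVPixelBufferRef, CoreVideo's C functions can in principle be P/Invoked straight through "__Internal" in the iOS player and the luma plane copied out. The sketch assumes ARKit's usual bi-planar YCbCr buffer layout and the Unity ARKit plugin's UnityEngine.XR.iOS namespace; the class name ARKitFrameGrabber is illustrative, and the size_t-to-ulong mapping in the signatures only holds on 64-bit (arm64) devices.

using System;
using System.Runtime.InteropServices;
using UnityEngine;
using UnityEngine.XR.iOS;

public class ARKitFrameGrabber : MonoBehaviour
{
    // CoreVideo's C API, assumed callable through "__Internal" on the
    // iOS player; size_t / CVOptionFlags are mapped to ulong, which
    // matches their width on arm64 devices.
    [DllImport("__Internal")]
    static extern int CVPixelBufferLockBaseAddress(IntPtr pixelBuffer, ulong lockFlags);

    [DllImport("__Internal")]
    static extern int CVPixelBufferUnlockBaseAddress(IntPtr pixelBuffer, ulong lockFlags);

    [DllImport("__Internal")]
    static extern IntPtr CVPixelBufferGetBaseAddressOfPlane(IntPtr pixelBuffer, ulong planeIndex);

    [DllImport("__Internal")]
    static extern ulong CVPixelBufferGetWidthOfPlane(IntPtr pixelBuffer, ulong planeIndex);

    [DllImport("__Internal")]
    static extern ulong CVPixelBufferGetHeightOfPlane(IntPtr pixelBuffer, ulong planeIndex);

    [DllImport("__Internal")]
    static extern ulong CVPixelBufferGetBytesPerRowOfPlane(IntPtr pixelBuffer, ulong planeIndex);

    const ulong kCVPixelBufferLock_ReadOnly = 1;

    public void ARFrameUpdated(UnityARCamera camera)
    {
        IntPtr pixelBuffer = camera.videoParams.cvPixelBufferPtr;
        if (pixelBuffer == IntPtr.Zero)
            return;

        // ARKit's camera buffer is bi-planar YCbCr; plane 0 is the
        // full-resolution luma (grayscale) plane, which is usually all
        // that feature detection needs.
        CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

        int width = (int)CVPixelBufferGetWidthOfPlane(pixelBuffer, 0);
        int height = (int)CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);
        int stride = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
        IntPtr basePtr = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);

        // Copy row by row, since each row may be padded past 'width'.
        byte[] gray = new byte[width * height];
        for (int row = 0; row < height; row++)
        {
            IntPtr rowPtr = new IntPtr(basePtr.ToInt64() + (long)row * stride);
            Marshal.Copy(rowPtr, gray, row * width, width);
        }

        CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

        // 'gray' now holds an 8-bit single-channel image: wrap it in a
        // CV_8UC1 Mat (e.g. mat.put(0, 0, gray) in OpenCV for Unity)
        // and run the feature recognition on that.
    }
}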

Try creating a new camera, add the UnityARVideo component to it, set its culling mask to Nothing, and have it render to a RenderTexture. It will render an unaugmented camera feed to the texture.
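
A minimal sketch of that setup, assuming the second camera (with UnityARVideo attached and culling mask set to Nothing) and its target RenderTexture are assigned in the inspector; the class and field names here are made up for illustration:

using System.Collections;
using UnityEngine;

public class ARBackgroundReadback : MonoBehaviour
{
    // Second camera with UnityARVideo attached and culling mask set to
    // Nothing; its targetTexture should be the RenderTexture below.
    public Camera backgroundCamera;
    public RenderTexture target;

    Texture2D readable;

    void Start()
    {
        readable = new Texture2D(target.width, target.height, TextureFormat.RGB24, false);
        StartCoroutine(ReadbackLoop());
    }

    IEnumerator ReadbackLoop()
    {
        var wait = new WaitForEndOfFrame();
        while (true)
        {
            yield return wait;

            // Copy the unaugmented feed from the GPU RenderTexture into
            // a CPU-side Texture2D.
            RenderTexture previous = RenderTexture.active;
            RenderTexture.active = target;
            readable.ReadPixels(new Rect(0, 0, target.width, target.height), 0, 0);
            readable.Apply();
            RenderTexture.active = previous;

            // Hand the pixels to OpenCV here, e.g. GetRawTextureData()
            // for a raw byte[], or Utils.texture2DToMat(readable, mat)
            // if you use the OpenCV for Unity asset.
        }
    }
}

Note that ReadPixels stalls the pipeline; on Unity 2018.1+, AsyncGPUReadback can do the same copy asynchronously if the synchronous version costs too much frame time.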
