
Data from Depth Sensor to OpenGL's z-buffer to achieve occlusion

I would like to learn more about occlusion in augmented reality apps using the data from a depth sensor (e.g. Kinect or the RealSense RGB-D Dev Kit).

I read that what one should do is compare the z-buffer values of the rendered objects with the depth map values from the sensor, and somehow mask them so that only the pixels closer to the user are visible. Does anyone have any resources or open source code that does this, or could help me understand it?

What is more, I want my hand (which I detect as a blob) to always occlude the virtual objects. Is there an easier option for this?

You can upload the depth data as a texture and bind it as the depth buffer for the render target.
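A minimal sketch of that idea (not a complete implementation): `W`, `H`, `colorTex`, and `sensorDepth01` are placeholder names, and `sensorDepth01` is assumed to already hold the sensor depths remapped to the [0, 1] window-space range, as described in the next step.

```cpp
// Assumes an OpenGL 3.x+ context and a loader such as GLEW is already set up.
GLuint depthTex = 0;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// sensorDepth01: W*H floats in [0, 1], converted from the sensor's metric depth.
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, W, H, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, sensorDepth01);

// Attach the sensor depth as the depth buffer of the render target.
GLuint fbo = 0;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);   // your existing color target
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);

// Render the virtual objects with depth testing enabled, but do NOT clear the
// depth attachment: the sensor depth already in the buffer then occludes any
// virtual fragment that lies behind the real scene.
glEnable(GL_DEPTH_TEST);
```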

This requires matching the near and far planes of the projection matrix with the min and max values of the depth sensor.
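For a standard perspective projection (with the default glDepthRange of [0, 1]), an eye-space distance d maps to a window-space depth of (1/d − 1/near) / (1/far − 1/near). A conversion along these lines could be used to fill the buffer uploaded above; this is a sketch that assumes raw sensor depths in millimetres with 0 marking invalid pixels:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Convert raw sensor depths to the non-linear window-space depth produced by a
// perspective projection with the given near/far planes.
std::vector<float> SensorDepthToWindowDepth(const std::vector<uint16_t>& depthMm,
                                            float nearPlane, float farPlane)
{
    std::vector<float> out(depthMm.size());
    for (size_t i = 0; i < depthMm.size(); ++i) {
        float d = depthMm[i] * 0.001f;                    // millimetres -> metres
        if (d <= 0.0f) { out[i] = 1.0f; continue; }       // invalid: treat as far
        d = std::min(std::max(d, nearPlane), farPlane);   // clamp to [near, far]
        // Perspective depth: 0 at the near plane, 1 at the far plane.
        out[i] = (1.0f / d - 1.0f / nearPlane) /
                 (1.0f / farPlane - 1.0f / nearPlane);
    }
    return out;
}
```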

If the render target isn't the same size as the depth data, you can instead sample the depth texture in the fragment shader and call discard when the fragment would be occluded.
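A sketch of that shader-based variant, with the GLSL embedded as a C++ string; `uSensorDepth` and `uScreenSize` are assumed uniform names, and the sensor texture is assumed to already hold window-space depth as converted above:

```cpp
// Fragment shader that samples the sensor depth at the fragment's screen
// position and discards fragments that lie behind the real geometry.
static const char* kOcclusionFragSrc = R"glsl(
#version 330 core
uniform sampler2D uSensorDepth; // sensor depth remapped to window space [0, 1]
uniform vec2 uScreenSize;       // render-target size in pixels
out vec4 fragColor;

void main() {
    vec2 uv = gl_FragCoord.xy / uScreenSize;     // normalised screen coordinates
    float sensorZ = texture(uSensorDepth, uv).r; // real-world depth at this pixel
    if (gl_FragCoord.z >= sensorZ)
        discard;                                 // virtual fragment is behind the real scene
    fragColor = vec4(1.0);                       // otherwise shade the virtual object as usual
}
)glsl";
```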
