
Data from Depth Sensor to OpenGL's z-buffer to achieve occlusion

I would like to learn more about occlusion in augmented reality apps using the data from a depth sensor (e.g. Kinect or the RealSense RGB-D Dev Kit).

I have read that one should compare the z-buffer values of the rendered objects with the depth map values from the sensor, and somehow mask the values so that only the pixels closer to the user are shown. Does anyone have any resources or open source code that does this, or that could help me understand it?

What is more, I want my hand (which I detect as a blob) to always occlude the virtual objects. Isn't there an easier way to do this?

You can upload the depth data as a texture and bind it as the depth buffer for the render target.
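A minimal sketch of that idea, assuming the sensor frame has already been converted to window-space [0, 1] depth values (one float per pixel in depthPixels, see the near/far mapping below) and that a modern GL context with GL_DEPTH_COMPONENT32F support is current. The function and variable names are my own, not from any particular SDK:

```cpp
// Upload a sensor depth frame as a depth texture and attach it as the depth
// buffer of an FBO, so the ordinary depth test rejects virtual fragments that
// lie behind real geometry.
#include <GL/glew.h>
#include <vector>

GLuint createOcclusionTarget(int width, int height,
                             const std::vector<float>& depthPixels,
                             GLuint colorTexture) {
    // Upload the sensor depth as a depth-component texture.
    GLuint depthTex;
    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, width, height, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, depthPixels.data());

    // Attach it as the depth attachment of the render target, next to the
    // color buffer the virtual objects are drawn into.
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTexture, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, depthTex, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return fbo;
}
```

With depth testing enabled (glEnable(GL_DEPTH_TEST), default GL_LESS), you then render the virtual content into this FBO without clearing the depth attachment, so the sensor depth acts as the pre-filled z-buffer.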

This requires matching the near and far planes of the projection matrix with the min and max values of the depth sensor.
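Concretely, the metric depth from the sensor has to be remapped to the same window-space depth that the projection of the virtual scene produces. A small sketch of that conversion for a standard OpenGL perspective projection; nearPlane and farPlane are assumptions and must be the same values used to build the projection matrix:

```cpp
// Convert a metric sensor depth d (meters along the camera axis) into the
// window-space [0, 1] depth produced by a perspective projection with the
// given near/far planes (default glDepthRange of [0, 1] assumed).
#include <algorithm>

float sensorDepthToWindowDepth(float d, float nearPlane, float farPlane) {
    // Invalid readings (0 / NaN holes) go to the far plane so they never occlude.
    if (!(d > 0.0f)) return 1.0f;
    d = std::clamp(d, nearPlane, farPlane);

    // NDC depth of a point at eye-space distance d, then map [-1, 1] -> [0, 1].
    float ndc = (farPlane + nearPlane) / (farPlane - nearPlane)
              - (2.0f * farPlane * nearPlane) / ((farPlane - nearPlane) * d);
    return 0.5f * ndc + 0.5f;
}
```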

If the render target isn't the same size as the depth data, you can instead sample it in the fragment shader and discard the fragment when it would be occluded.
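A sketch of that shader-based alternative, with the sensor depth bound as an ordinary 2D texture that already holds window-space [0, 1] depth (as computed above). The uniform names (uSensorDepth, uScreenSize) are hypothetical:

```cpp
// GLSL fragment shader stored as a C++ string: discard virtual fragments that
// fall behind the real-world depth sampled from the sensor texture.
static const char* kOcclusionFragmentShader = R"(
#version 330 core
uniform sampler2D uSensorDepth;   // sensor depth remapped to [0, 1]
uniform vec2 uScreenSize;         // render-target size in pixels
out vec4 fragColor;

void main() {
    // Sample the sensor depth at this fragment's normalized screen position;
    // texture filtering bridges the resolution mismatch.
    vec2 uv = gl_FragCoord.xy / uScreenSize;
    float realDepth = texture(uSensorDepth, uv).r;

    // The real world is closer than this virtual fragment: let it show through.
    if (gl_FragCoord.z >= realDepth)
        discard;

    fragColor = vec4(1.0, 0.5, 0.2, 1.0); // replace with the object's shading
}
)";
```

Since your hand is detected as a blob anyway, the same mechanism covers it: as long as the hand pixels carry their (near) sensor depth, they will always win the depth comparison against virtual objects behind them.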
