
Accessing 32-bit Depth Buffer from fragment shader?

I'm trying to use the following technique for shadowing; I read about it on the NVIDIA site and it seemed like a good technique. I would prefer it to calculating shadow volumes on the CPU, because it seems more 'true' and I could use it for soft shadowing:

1st pass:

  • Fill the depth buffer from the perspective of LIGHT0. Copy this depth buffer for the second pass. (*)

2nd pass:

  • Render the view from EYE, and for each fragment:
    • Get the fragment's XY location in the depth buffer stored in (*), and read the corresponding 32-bit value.
    • Calculate the distance to the light.
    • Compare this distance to the stored depth buffer value.
    • If it is larger, the fragment is drawn in glDisable(LIGHT0) mode; otherwise it is drawn with the light enabled. For this I use two fragment shaders, and fragments blend/switch between the two according to the distance comparison.

Now, I want to do the last steps in the fragment shader, for a few reasons. One of them is that I want to take the distance into account for the 'effect' of the shadowing. In my game, if the distance to the obstructing object is small, it is safe to say that the shadow will be very 'strict'. If it is further away, global illumination kicks in more and the shadow is slighter. This works because it's a card game; it would not be the case for more complicated 'concave' shapes.

But I'm new to OpenGL, and I don't understand how to do any of the following:

  • Access that 1st-pass depth buffer in the fragment shader without copying it to a 2D texture. I assume that is not possible?
  • If copying the 32-bit depth buffer to a texture that has 8 bits in each R, G, B, A component and then reassembling that value in the fragment shader is the most efficient thing I can do?
  • Whether there are cross-platform extensions I could use for this.

Thanks if someone can help me or give me some more ideas. I'm kind of stumped right now, and my lack of good hardware and free time makes debugging and trying everything out an exhausting process.

The first way is to use FBOs with a GL_DEPTH_COMPONENT texture attached to the GL_DEPTH_ATTACHMENT attachment point.

The second way is to use glCopyTexImage2D, again with a GL_DEPTH_COMPONENT texture.

FBOs are cross-platform and available in almost every modern OpenGL implementation, so you should have them available to you.

You are right: you need to create a 2D texture from the depth buffer values in order to use those values in the 2nd pass.

Concerning the texture itself, I think that copying from a 32-bit depth buffer to 8-bit RGBA will not do a raw cast of the data: for a mid-range depth value (say 0x80000000), you will get a half-tone on each of R, G, B and A in your RGBA texture:

RGBA[0] = 0x80;
RGBA[1] = 0x80;
RGBA[2] = 0x80;
RGBA[3] = 0x80;

Whereas you might have expected (a cast):

RGBA[0] = 0x80;
RGBA[1] = 0;
RGBA[2] = 0;
RGBA[3] = 0;

So I am not sure about the right format, but I would suggest not modifying the data during the copy, since you don't want a conversion overhead.
