
Storing vertex depth information in a texture in OpenGL shadow mapping

I'm currently programming shadow mapping (cascaded shadow mapping, to be precise) into my C++ OpenGL engine. I therefore want a texture containing the distance between my light source and every pixel in my shadow map. Which texture type should I use?

I saw that there is a GL_DEPTH_COMPONENT texture internal format, but it scales the data I want to give the texture to [0,1]. Should I invert my lengths once when I create the shadow map, and then a second time during the final rendering to get the real lengths back? That seems quite useless!

Is there a way to use textures to store lengths without inverting them twice (once at texture creation, once during use)?

I'm not sure what you mean by invert (you surely can't mean inverting the distance, as that won't work). What you do is transform the distance to the light source into the [0,1] range.

This can be done by constructing a usual projection matrix for the light source's view and applying it to the vertices in the shadow map construction pass. This way their distance to the light source is written into the depth buffer, to which you can connect a texture with GL_DEPTH_COMPONENT format either by glCopyTexSubImage or by FBOs. In the final pass you of course use the same projection matrix to compute the texture coordinates into the shadow map by projective texturing (using a sampler2DShadow sampler in GLSL).
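A minimal sketch of the FBO route, assuming an OpenGL 3.0+ context with a function loader already initialized; the names shadowTex, shadowFBO and SHADOW_SIZE are illustrative:

```cpp
// Create a depth texture and attach it to an FBO for the shadow pass.
const int SHADOW_SIZE = 2048;            // shadow map resolution (arbitrary)
GLuint shadowTex = 0, shadowFBO = 0;

glGenTextures(1, &shadowTex);
glBindTexture(GL_TEXTURE_2D, shadowTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24,
             SHADOW_SIZE, SHADOW_SIZE, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Enable hardware depth comparison so GLSL can sample it as sampler2DShadow.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);

glGenFramebuffers(1, &shadowFBO);
glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, shadowTex, 0);
glDrawBuffer(GL_NONE);                   // depth-only pass, no color buffer
glReadBuffer(GL_NONE);
// ... render the scene with the light's view and projection matrices ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```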

But this transformation is not linear, as the depth buffer has higher precision near the viewer (or the light source, in this case). Another disadvantage is that you have to know the valid range of the distance values (the farthest point your light source affects). Using shaders (which I assume you do), you can make this transformation linear by simply dividing the distance to the light source by this maximum distance and manually assigning the result to the fragment's depth value (gl_FragDepth in GLSL), which is probably what you meant by "invert".
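For illustration, a fragment shader that does this division, written here as a GLSL string inside a C++ source file; the uniform names lightPos and maxLightDistance are assumptions, not fixed API:

```cpp
// Shadow-pass fragment shader: replaces the non-linear depth with a linear
// value (distance to the light divided by the light's maximum reach).
const char* linearDepthFS = R"glsl(
    #version 330 core
    uniform vec3  lightPos;          // light position in world space
    uniform float maxLightDistance;  // farthest point the light affects
    in vec3 worldPos;                // interpolated world-space position
    void main()
    {
        gl_FragDepth = distance(worldPos, lightPos) / maxLightDistance;
    }
)glsl";
```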

The division (and the need to know the maximum distance) can be avoided by using a floating point texture for the light distance: just write the distance out as a color channel and perform the depth comparison in the final pass yourself (using a normal sampler2D). But linearly filtering floating point textures is only supported on newer hardware, and I'm not sure this will be faster than a single division per fragment. The advantage of this approach is that it opens the path to things like "variance shadow maps", which won't work well with normal ubyte textures (because of their low precision), nor with depth textures.
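A sketch of that alternative, assuming a GL_R32F color attachment in the shadow pass; shader and variable names (including the ad-hoc bias value) are again illustrative:

```cpp
// Shadow pass: write the raw light distance into a single float channel.
const char* storeDistanceFS = R"glsl(
    #version 330 core
    uniform vec3 lightPos;
    in vec3 worldPos;
    out float lightDistance;         // goes to the GL_R32F color attachment
    void main()
    {
        lightDistance = distance(worldPos, lightPos);  // no [0,1] mapping
    }
)glsl";

// Final pass: compare the stored distance against the fragment's own
// distance yourself, using a plain sampler2D instead of sampler2DShadow.
const char* shadowCompareFS = R"glsl(
    #version 330 core
    uniform sampler2D shadowMap;
    uniform vec3 lightPos;
    in vec3 worldPos;
    in vec4 shadowCoord;             // projective coordinates into the map
    out vec4 fragColor;
    void main()
    {
        float stored  = textureProj(shadowMap, shadowCoord).r;
        float current = distance(worldPos, lightPos);
        float lit = (current - 0.05 <= stored) ? 1.0 : 0.0;  // 0.05: ad-hoc bias
        fragColor = vec4(vec3(lit), 1.0);
    }
)glsl";
```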

So to sum up, GL_DEPTH_COMPONENT is just a good compromise between ubyte textures (which lack the necessary precision, as GL_DEPTH_COMPONENT should have at least 16 bit precision) and float textures (which are not that fast, or not completely supported, on older hardware). But due to its fixed point format you won't get around a transformation into the [0,1] range (be it linear or projective). I'm not sure whether floating point textures would be faster, as you only save a single division, but if you are on the newest hardware, with support for linear (or even trilinear) filtering of float textures and for one- and two-component float textures and render targets, it might be worth a try.
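For reference, the two allocation calls side by side (internal formats as discussed above; size is a placeholder):

```cpp
// Fixed point depth texture: at least 16 bit, values clamped to [0,1].
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, size, size, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);

// Single-channel 32 bit float texture: stores raw distances, no [0,1] mapping.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, size, size, 0,
             GL_RED, GL_FLOAT, nullptr);
```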

Of course, if you are using the fixed function pipeline, you only have GL_DEPTH_COMPONENT as an option, but given your question I assume you are using shaders.
