Adaptive depth bias for texture sampling
I have a complex 3D scene; the values in my depth buffer range from a few centimeters (close shots) up to several kilometers.
For various effects I use a depth bias, an offset, to work around artifacts (SSAO, shadows). Issues can even occur during depth peeling, when comparing the depth of the current peel against the previous one.
I have fixed these issues for close-up shots, but once a fragment is far enough away, the bias becomes inadequate.
I am wondering how to handle the bias for such scenes. Should the bias depend on the world-space depth of the current pixel, or should the effect simply be disabled beyond a given depth?
Are there good practices regarding these issues, and how can I address them?
It seems I found a way.
I found this link about shadow bias: https://digitalrune.github.io/DigitalRune-Documentation/html/3f4d959e-9c98-4a97-8d85-7a73c26145d7.htm
Depth bias and normal offset values are specified in shadow map texels. For example, depth bias = 3 means that the pixel is moved the length of 3 shadow map texels closer to the light.
By keeping the bias proportional to the projected shadow map texels, the same settings work at all distances.
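That texel-proportional idea can be sketched in GLSL roughly like this, for the simplest case of a directional light with an orthographic shadow projection (where one texel covers a constant world-space size). The uniform names `lightFrustumWidth`, `shadowMapSize` and `depthBiasInTexels` are assumptions for illustration, not names from the linked docs:

```glsl
// Sketch: scale a depth bias by the world-space size of one shadow-map texel.
uniform float lightFrustumWidth;  // world-space width covered by the shadow map
uniform float shadowMapSize;      // shadow map resolution, in texels
uniform float depthBiasInTexels;  // e.g. 3.0, as in the quoted docs

float texel_proportional_bias()
{
    // For an orthographic light projection, one texel spans the same
    // world-space distance everywhere in the map.
    float texelWorldSize = lightFrustumWidth / shadowMapSize;
    return depthBiasInTexels * texelWorldSize;
}
```

For a perspective (spot light) projection the texel footprint grows with distance from the light, so the per-texel size would have to be computed per fragment instead of once.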
I use the world-space difference between the current point and a neighboring pixel that has the same depth component. The bias then becomes roughly "the average distance between two neighboring pixels". The further away the pixel is, the larger the bias (from a few millimeters near the near plane to meters at the far plane).
So for each of my sampling points, I offset its position by a few pixels in the x direction (3 pixels gives me good results in various scenes). I compute the world-space difference between currentPoint and this new offsetedPoint, and I use this difference as a bias for all my depth testing.
Code:
float compute_depth_offset() {
    mat4 inv_mvp = inverse(mvp);

    // Current pixel and a pixel offset along x, normalized to [0, 1].
    vec2 currentPixel = vec2(gl_FragCoord.xy) / dim;
    vec2 nextPixel = vec2(gl_FragCoord.xy + vec2(depth_transparency_bias, 0.0)) / dim;

    // Reconstruct both points in NDC, giving the neighbor the same depth.
    vec4 currentNDC;
    vec4 nextNDC;
    currentNDC.xy = currentPixel * 2.0 - 1.0;
    currentNDC.z = (2.0 * gl_FragCoord.z - depth_range.near - depth_range.far) / (depth_range.far - depth_range.near);
    currentNDC.w = 1.0;
    nextNDC.xy = nextPixel * 2.0 - 1.0;
    nextNDC.z = currentNDC.z;
    nextNDC.w = currentNDC.w;

    // Unproject to world space (perspective divide by w).
    vec4 world = inv_mvp * currentNDC;
    world.xyz = world.xyz / world.w;
    vec4 nextWorld = inv_mvp * nextNDC;
    nextWorld.xyz = nextWorld.xyz / nextWorld.w;

    // World-space distance covered by depth_transparency_bias pixels.
    return length(nextWorld.xyz - world.xyz);
}
Recently I have used only the world-space derivative of the current pixel's position:
float compute_depth_offset(float zNear, float zFar)
{
    mat4 mvp = projection * modelView;
    mat4 inv_mvp = inverse(mvp);

    // Reconstruct the current fragment in NDC, assuming the default
    // depth range [0, 1], so window depth maps to NDC as 2z - 1.
    vec2 currentPixel = vec2(gl_FragCoord.xy) / dim;
    vec4 currentNDC;
    currentNDC.xy = currentPixel * 2.0 - 1.0;
    currentNDC.z = 2.0 * gl_FragCoord.z - 1.0;
    currentNDC.w = 1.0;

    // Unproject to world space (perspective divide by w).
    vec4 world = inv_mvp * currentNDC;
    world.xyz = world.xyz / world.w;

    // Screen-space derivatives give the world-space extent of one pixel.
    vec3 depth = max(abs(dFdx(world.xyz)), abs(dFdy(world.xyz)));
    return depth.x + depth.y + depth.z;
}
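For illustration, here is one way such an adaptive offset might replace a fixed epsilon in a depth-peeling comparison. This is only a sketch: `previousDepthTex`, the use of linear view-space depths, and the surrounding setup are assumptions, not part of the original shaders:

```glsl
// Sketch: using the adaptive offset instead of a constant epsilon when
// peeling. Depths are assumed to be stored in a comparable linear space.
uniform sampler2D previousDepthTex;

bool is_behind_previous_layer(vec2 uv, float currentDepth)
{
    float previousDepth = texture(previousDepthTex, uv).r;
    // compute_depth_offset() returns roughly the world-space size of a
    // few pixels at this fragment, so the tolerance grows with distance.
    float bias = compute_depth_offset();
    return currentDepth > previousDepth + bias;
}
```

The benefit over a constant epsilon is that the tolerance stays a few pixels wide everywhere: millimeters near the camera, meters at kilometer range, instead of one value that is too large up close and too small far away.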