
Adaptive depth bias for texture sampling

I have a complex 3D scene; the values in my depth buffer range from close-up shots of several centimeters out to several kilometers.

For various effects I use a depth bias/offset to work around artifacts (SSAO, shadow mapping). Issues can even occur during depth peeling, when comparing the depth of the current peel against the previous one.

I have fixed those issues for close-up shots, but once a fragment is far enough away, the bias becomes ineffective.

I am wondering how to handle the bias for such scenes. Should the bias depend on the current world-space depth of the pixel, or should the effect be disabled entirely beyond a given depth?

Are there any good practices regarding these issues, and how can I address them?

It seems I have found a way.

I found this link about shadow bias: https://digitalrune.github.io/DigitalRune-Documentation/html/3f4d959e-9c98-4a97-8d85-7a73c26145d7.htm

Depth bias and normal offset values are specified in shadow map texels. For example, depth bias = 3 means that the pixel is moved the length of 3 shadow map texels closer to the light.

By keeping the bias proportional to the projected shadow map texels, the same settings work at all distances.
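As an illustration, here is a minimal sketch of what a texel-proportional bias could look like (my paraphrase, not the DigitalRune code). It assumes an orthographic light, so light-space depth is linear in world units; texel_world_size (the world-space extent of one shadow-map texel at the receiver) and light_depth_range (the depth extent of the light frustum in world units) are hypothetical inputs:

uniform sampler2D shadow_map;

float shadow_test(vec3 shadowCoord, float texel_world_size, float light_depth_range)
{
    // "depth bias = 3": shift the comparison 3 shadow-map texels toward
    // the light, converted from world units to the [0, 1] stored depth.
    float bias = 3.0 * texel_world_size / light_depth_range;
    float storedDepth = texture(shadow_map, shadowCoord.xy).r;
    return (shadowCoord.z - bias <= storedDepth) ? 1.0 : 0.0;
}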

I use the world-space difference between the current point and a neighboring pixel with the same depth component. The bias becomes something close to "the average distance between 2 neighboring pixels". The further away the pixel is, the larger the bias will be (from a few millimeters near the near plane to meters at the far plane).

  • So for each of my sampling points, I offset its position by a few pixels in its x direction (3 pixels gives me good results on various scenes).

  • I compute the world-space difference between the current point and this new offset point.

  • I use this difference as the bias for all my depth testing.

The code:

// Assumed uniforms, declared elsewhere in the shader:
//   mvp                      - model-view-projection matrix
//   dim                      - viewport size in pixels
//   depth_transparency_bias  - pixel offset, e.g. 3.0
//   depth_range              - near/far pair mirroring gl_DepthRange

float compute_depth_offset() {

    mat4 inv_mvp = inverse(mvp);

    // Window coordinates of the current fragment and of a fragment a few
    // pixels away along x, both normalized to [0, 1].
    vec2 currentPixel = vec2(gl_FragCoord.xy) / dim;
    vec2 nextPixel = vec2(gl_FragCoord.xy + vec2(depth_transparency_bias, 0.0)) / dim;

    vec4 currentNDC;
    vec4 nextNDC;

    // Window space -> NDC: xy from [0, 1] to [-1, 1], z through the
    // inverse of the glDepthRange mapping.
    currentNDC.xy = currentPixel * 2.0 - 1.0;
    currentNDC.z = (2.0 * gl_FragCoord.z - depth_range.near - depth_range.far) / (depth_range.far - depth_range.near);
    currentNDC.w = 1.0;

    // The neighbor keeps the same depth; only xy changes.
    nextNDC.xy = nextPixel * 2.0 - 1.0;
    nextNDC.z = currentNDC.z;
    nextNDC.w = currentNDC.w;

    // Unproject both points to world space (perspective divide by w).
    vec4 world = inv_mvp * currentNDC;
    world.xyz = world.xyz / world.w;

    vec4 nextWorld = inv_mvp * nextNDC;
    nextWorld.xyz = nextWorld.xyz / nextWorld.w;

    // World-space distance spanned by the pixel offset: the adaptive bias.
    return length(nextWorld.xyz - world.xyz);
}
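For example, the returned bias can be applied in a depth-peeling comparison along these lines (a sketch, not my exact test; current_eye_z and previous_eye_z are hypothetical names for linear eye-space depths, which are in world units and therefore comparable with the world-space bias):

float bias = compute_depth_offset();

// Peel away anything at or in front of the previous layer; the adaptive
// tolerance absorbs precision issues at any distance.
if (current_eye_z <= previous_eye_z + bias)
    discard;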

More recently, I have used only the world-space derivative of the current pixel's position:

// Assumed uniforms: projection, modelView, dim (viewport size in pixels).
float compute_depth_offset(float zNear, float zFar)
{
    // zNear/zFar are the glDepthRange values (0.0 and 1.0 by default);
    // the first version hard-coded them.
    mat4 mvp = projection * modelView;
    mat4 inv_mvp = inverse(mvp);

    // Window coordinates normalized to [0, 1].
    vec2 currentPixel = vec2(gl_FragCoord.xy) / dim;
    vec4 currentNDC;

    // Window space -> NDC.
    currentNDC.xy = currentPixel * 2.0 - 1.0;
    currentNDC.z = (2.0 * gl_FragCoord.z - zNear - zFar) / (zFar - zNear);
    currentNDC.w = 1.0;

    // Unproject the fragment to world space.
    vec4 world = inv_mvp * currentNDC;
    world.xyz = world.xyz / world.w;

    // Screen-space derivatives give the world-space footprint of one pixel.
    vec3 depth = max(abs(dFdx(world.xyz)), abs(dFdy(world.xyz)));

    return depth.x + depth.y + depth.z;
}
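One caveat: dFdx/dFdy are evaluated across 2x2 pixel quads, so the derivatives (and therefore the bias) can spike at depth discontinuities such as silhouette edges. A simple clamp keeps that in check (max_bias is a tunable value of my choosing, not part of the original code):

// Usage sketch: clamp the derivative-based bias, since screen-space
// derivatives can explode across depth discontinuities.
const float max_bias = 10.0; // assumed upper bound, in world units
float bias = min(compute_depth_offset(0.0, 1.0), max_bias);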

