
Setting gl_FragDepth causes lag?

I designed a very basic set of depth shaders for rendering depth into my shadow-map depth textures. This is the depth fragment shader I used:

#version 330 core

in vec3 FragPos;
uniform vec3 lightPos;
uniform float farPlane;

void main()
{
    // Distance from the fragment to the point light, normalized to
    // [0, 1] by dividing by the light's far-plane distance.
    float depth = length(FragPos - lightPos);
    depth /= farPlane;
    gl_FragDepth = depth;
}

The code isn't much: it computes the distance between the fragment and the light source, normalizes the value to [0, 1] by dividing it by the light's far-plane distance, and writes the result to gl_FragDepth. For example, with farPlane = 100.0, a fragment 25 units from the light ends up with a depth of 0.25.

The code works without any errors. I was testing the renderer with just two objects in the scene and one point light source. Later, I loaded a big interior scene, and the FPS dropped from the 60-70 range down to 30-40.

I did some GPU profiling with NVIDIA Nsight and discovered that the glDrawElements call for my shadow pass was taking 4 ms. I narrowed the problem down to the final line of the fragment shader above, gl_FragDepth = depth.

What I found was that if I removed the statement gl_FragDepth = depth, the FPS jumped back into the 70s, with the draw call taking just 1 ms. Note that everything else was left untouched.

How could setting the gl_FragDepth value cause such a drop in performance?

Writing to gl_FragDepth disables early fragment tests. As the OpenGL wiki explains:

Therefore, an implementation is free to apply early fragment tests if the Fragment Shader being used does not do anything that would impact the results of those tests. So if a fragment shader writes to gl_FragDepth, thus changing the fragment's depth value, then early testing cannot take place, since the test must use the new computed value.
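
One way to get early depth testing back is to stop writing depth in the fragment shader and let the fixed-function pipeline produce the linear value instead. Below is a minimal sketch of that idea, assuming each cubemap face is rendered with its own view-projection matrix; the uniform names model, faceViewProj, lightPos, and farPlane are placeholders rather than names from the original code:

#version 330 core
layout (location = 0) in vec3 aPos;

uniform mat4 model;
uniform mat4 faceViewProj; // view-projection matrix of the cubemap face being rendered
uniform vec3 lightPos;
uniform float farPlane;

void main()
{
    vec4 worldPos = model * vec4(aPos, 1.0);
    gl_Position = faceViewProj * worldPos;

    // Replace clip-space z so that, after the perspective divide and the
    // default [0, 1] depth-range mapping, the fixed-function depth equals
    // distance / farPlane. No gl_FragDepth write is needed any more.
    float lightDistance = length(worldPos.xyz - lightPos) / farPlane;
    gl_Position.z = (2.0 * lightDistance - 1.0) * gl_Position.w;
}

The fragment shader can then be left empty, so the driver is again free to apply early fragment tests:

#version 330 core

void main() {}

The trade-off is that the distance is now computed per vertex and interpolated across the triangle, which only approximates the per-fragment length() for large triangles close to the light. Alternatively, on GL 4.2+ (or with ARB_conservative_depth) you can declare gl_FragDepth with a qualifier such as layout (depth_greater) out float gl_FragDepth; this lets the driver keep part of the early testing despite the depth write, but only when the shader can actually guarantee the declared relation, which a distance-based depth generally cannot.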
