
OpenGL Depth Buffer Behaving Not As Expected

I have been working on the beginnings of an engine as an educational pursuit, and I've run into an OpenGL concept which I THOUGHT I understood, but whose behavior I cannot explain. The issue is with the depth buffer. Note that I have already fixed the issue, and at the end of this post I will explain what fixed it, but I do not understand why my solution works. First I initialize GLUT & GLEW:

//Initialize openGL
glutInit(&argc, argv);
//Set display mode and window attributes
glutInitDisplayMode(GLUT_DOUBLE | GLUT_DEPTH | GLUT_RGB);

//Size and position attributes can be found in constants.h
glutInitWindowSize(WINDOW_WIDTH, WINDOW_HEIGHT);
glutInitWindowPosition(WINDOW_XPOS, WINDOW_YPOS);
//Create window
glutCreateWindow("Gallagher");

// Initialize GLEW
glewExperimental = true;
glewInit();

//Initialize Graphics program
Initialize();

Then I initialize my program (segments omitted for readability and lack of relevance):

//Remove cursor
glutSetCursor(GLUT_CURSOR_NONE);

//Enable depth buffering
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glDepthFunc(GL_LESS);
glDepthRange(0.0f, 1.0f);

//Set back color
glClearColor(0.0,0.0,0.0,1.0);

//Set scale and dimensions of orthographic viewport
//These values can be found in constants.h
//Program uses a right handed coordinate system.
glOrtho(X_LEFT, X_RIGHT, Y_DOWN, Y_UP, Z_NEAR, Z_FAR);

Everything beyond that point just initializes various engine components, loads the .obj files, and initializes instances of a ModularGameObject class and attaches the meshes to them; none of it touches any relevant GLUT/GLEW state. However, before I go on it may be important to specify the following values:

X_LEFT = -1000;
X_RIGHT = 1000;
Y_DOWN = -1000;
Y_UP = 1000;
Z_NEAR = -0.1;
Z_FAR = -1000;

These values cause my viewport to follow a right-handed coordinate system. The final segment of code that seems to be involved in the problem is my vertex shader:

#version 330 core
//Position of vertices in attribute 0
layout(location = 0) in vec4 _vertexPosition;
//Vertex Normals in attribute 1
layout(location = 1) in vec4 _vertexNormal;

//Model transformations
//Uniform location of model transformation matrix
uniform mat4 _modelTransformation;

//Uniform location of camera transformations
//Camera transformation matrix
uniform mat4 _cameraTransformation;
//Camera perspective matrix
uniform mat4 _cameraPerspective;

//Uniform location of inverse screen dimensions
//This is used because GLSL normalizes viewport from -1 to 1
//So any vector representing a position in screen space must be multiplied by this vector before display
uniform vec4 _inverseScreenDimensions;

//Output variables
//Indicates whether a vertex is valid or not, non valid vertices will not be drawn.
flat out int _valid;        // 0 = valid vertex
//Normal to be sent to fragment shader
smooth out vec4 _normal;

void main()
{
    //Initiate transformation pipeline

    //Transform to world space
    vec4 vertexInWorldSpace = vec4(_modelTransformation *_vertexPosition);

    //Transform to camera space
    vec4 vertexInCameraSpace = vec4(_cameraTransformation * vertexInWorldSpace);

    //Project to screen space
    vec4 vertexInScreenSpace = vec4(_cameraPerspective * vertexInCameraSpace);

    //Transform to device coordinates and store
    vec4 vertexInDeviceSpace = vec4(_inverseScreenDimensions * vertexInScreenSpace);
    //Store transformed vertex
    gl_Position = vertexInScreenSpace;

}

This code results in all transformations and normal calculations (not included) being done correctly, but every face of my models constantly fights to be drawn above all the others. The only time I have no issues is when I stand inside the first model being drawn; then nothing flickers and I can view the inside of Suzanne's head just as I should be able to.

After weeks of trying everything I could possibly think of, I finally worked my way to a solution that involves changing/adding a mere two lines of code. First, I added this line to the end of the main function in my vertex shader:

gl_Position.z = 0.0001+vertexInScreenSpace.z;

Adding this line made every bit of z-fighting go away, except that the depth buffer was now completely backwards: vertices further away were reliably drawn on top of vertices in front. This is my first question: why would this line of code cause more reliable behavior?

Now that I had reliable behavior and no more depth-fighting, it was just a matter of reversing the draw order, so I changed my call to glDepthRange to the following:

glDepthRange(1.0f, 0.0f);

I was under the assumption that glDepthRange(0.0f, 1.0f) would cause objects closer to my Z_NEAR (-0.1) to end up closer to 0 and objects closer to my Z_FAR (-1000) to end up closer to 1. Having my depth test set to GL_LESS would then make perfect sense; in fact, this should be the case regardless of what my Z_NEAR and Z_FAR are, because of the way glDepthRange maps the values, if I'm not mistaken.
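In other words, my assumption was that after the vertex shader the fixed-function stages behave roughly like this (just a sketch of my understanding in standalone C++, not code from my engine):

//Sketch of my assumption: glDepthRange(rangeNear, rangeFar) linearly maps
//the NDC z value from [-1, 1] onto [rangeNear, rangeFar] before the depth test.
double ndcToWindowDepth(double zNdc, double rangeNear, double rangeFar)
{
    return rangeNear + (rangeFar - rangeNear) * (zNdc + 1.0) * 0.5;
}

Under that assumption, glDepthRange(0.0f, 1.0f) together with GL_LESS should draw closer fragments over further ones.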

I must be mistaken, though, because this change would mean that objects closer to me store a value closer to 1 in the depth buffer and objects further away store a value closer to 0, which would produce a backwards draw order. But sure enough, it works like a charm.

If anybody can point me in the direction of why my assumptions are wrong and what I might not be factoring into my understanding of GLSL and depth buffering, I would appreciate it. I would rather not move on with my engine until I completely understand how its foundation works.

Edit: The contents of my _cameraPerspective matrix are as follows:

AspectX     0           0               0
0           AspectY     0               0
0           0           1               0
0           0           1/focalLength   0

where AspectX is 16 and AspectY is 9. The focal length defaults to 70, but controls were added to change it during runtime.

As derhass pointed out, this does not explain how any of the information passed to glOrtho() is taken into account by the shader. Since I am not using the standard pipeline and matrix stack, the viewport dimensions are accounted for through _inverseScreenDimensions. This is a vec4 containing [1/X_RIGHT, 1/Y_UP, 1/Z_FAR, 1], or, substituting the values, [1/1000, 1/1000, -1/1000, 1].

Multiplying the screen-space coordinate vector by this in my vertex shader results in an X value between -1 and 1, a Y value between -1 and 1, a Z value between 0 and 1 (if the object was in front of the camera it had a negative z coordinate to begin with), and a W of 1.

If I'm not mistaken, this would be the final step to reach "device coordinates", followed by drawing the mesh.

Please keep the original question in mind: I know this is not streamlined, and I know I'm not using GLM or the other commonly used libraries for this, but my question is not "Hey guys, fix this!" My question is: why did the changes I made fix this?

Near and far are the distances of the near and far planes in the direction you are looking, and therefore should usually both be positive numbers. Negative numbers would put the clipping planes behind the view origin, which is probably not what you want.
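For example, a call like the following (a sketch assuming you want the near plane 0.1 units and the far plane 1000 units in front of the camera) keeps both planes in front of the view origin:

glOrtho(X_LEFT, X_RIGHT, Y_DOWN, Y_UP, 0.1, 1000.0);

With a right-handed eye space looking down -z, this places the clipping planes at eye-space z = -0.1 and z = -1000.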

Using the following matrix as the projection matrix:

AspectX     0           0               0
0           AspectY     0               0
0           0           1               0
0           0           1/focalLength   0

is going to completely destroy the depth value.

When this is applied to a vector (x,y,z,w)^T, you get z'=z and w'=z/focalLength as the clip-space components. After the perspective divide, you end up with an NDC z component of z'/w', which is just focalLength and completely independent of the eye-space z value. So you project everything to the same depth, which totally explains the behavior you have seen.
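To make this concrete, here is a minimal standalone C++ sketch (using your default focalLength of 70 and two made-up eye-space depths) that applies just the last two rows of your matrix and prints the resulting NDC z:

#include <cstdio>

int main()
{
    const double focalLength = 70.0;            //default value from the question
    const double eyeZ[2] = { -0.1, -1000.0 };   //a near point and a far point

    for (double z : eyeZ)
    {
        double clipZ = z;                       //third row (0 0 1 0)
        double clipW = z / focalLength;         //fourth row (0 0 1/focalLength 0)
        std::printf("eye z = %8.1f -> NDC z = %f\n", z, clipZ / clipW);
    }
    //Both points print NDC z = 70: the projected depth is constant and
    //independent of the eye-space z.
    return 0;
}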

This page explains how projection matrices are typically built, and in particular offers many details on how the z value is mapped.

With the line gl_Position.z = 0.0001+vertexInScreenSpace.z; you actually get some kind of "working" depth, since the NDC z coordinate then becomes (0.0001+z')/w', which is focalLength * (1 + 0.0001/z) and thus finally at least a function of the eye-space z, as it should be. One could calculate what near and far values that mapping would actually produce, but carrying out that calculation is quite pointless for this answer. You should make yourself familiar with the math behind computer graphics projections, especially linear algebra and projective spaces.

The reason the depth test is inverted is that your projection matrix in effect negates the z coordinate. Usually, the view matrix is constructed in such a way that the viewing direction is -z, and the projection matrix has (0 0 -1 0) as its last row, while you have (0 0 1/focalLength 0), which in effect multiplies z by -1 relative to the usual convention.
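For comparison, a typical OpenGL perspective matrix (the glFrustum form, with near plane n > 0 and far plane f > 0) looks like this:

2n/(r-l)    0           (r+l)/(r-l)     0
0           2n/(t-b)    (t+b)/(t-b)     0
0           0           -(f+n)/(f-n)    -2fn/(f-n)
0           0           -1              0

The -1 in the last row makes w' = -z (positive for points in front of the camera), and the third row makes the NDC depth z'/w' an actual, albeit non-linear, function of the eye-space z.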
