
How to correctly linearize depth in OpenGL ES in iOS?

I'm trying to render a forest scene for an iOS app with OpenGL. To make it a little nicer, I'd like to implement a depth effect in the scene. However, I need a linearized depth value from the OpenGL depth buffer to do so. Currently I am using a computation in the fragment shader (which I found here).

Therefore my terrain fragment shader looks like this:

#version 300 es

precision mediump float;

// near/far plane distances (these uniforms were missing from the snippet)
uniform float nearz;
uniform float farz;

layout(location = 0) out lowp vec4 out_color;

float linearizeDepth(float depth) {
    return 2.0 * nearz / (farz + nearz - depth * (farz - nearz));
}

void main(void) {
    float depth = gl_FragCoord.z;
    float linearized = linearizeDepth(depth);
    out_color = vec4(linearized, linearized, linearized, 1.0);
}

However, this results in the following output:

(screenshot: resulting depth output) As you can see, the further away you get, the more "stripy" the resulting depth value gets (especially behind the ship). If the terrain tile is close to the camera, the output is somewhat okay.

I even tried another computation:

float linearizeDepth(float depth) {
    return 2.0 * nearz * farz / (farz + nearz - (2.0 * depth - 1.0) * (farz - nearz));
}

which resulted in way too high a value, so I scaled it down by dividing:

float linearized = (linearizeDepth(depth) - 2.0) / 40.0;

(screenshot: second resulting output)

Nevertheless, it gave a similar result.
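For reference, that second formula returns camera-space distance in the [nearz, farz] range, so a more conventional way to bring it into [0, 1] is to divide by farz rather than an ad-hoc constant. A minimal sketch, assuming nearz and farz are supplied as uniforms (this normalizes the range, though it does not by itself remove the banding):

#version 300 es
precision mediump float;

uniform float nearz;
uniform float farz;

layout(location = 0) out lowp vec4 out_color;

// Standard inversion of the perspective depth mapping: converts the
// [0,1] depth-buffer value back to camera-space distance in [nearz, farz].
float linearizeDepth(float depth) {
    return 2.0 * nearz * farz / (farz + nearz - (2.0 * depth - 1.0) * (farz - nearz));
}

void main(void) {
    // Normalize to [0,1] by dividing by the far plane distance.
    float linearized = linearizeDepth(gl_FragCoord.z) / farz;
    out_color = vec4(linearized, linearized, linearized, 1.0);
}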

So how do I achieve a smooth, linear transition between the near and the far plane, without any stripes? Has anybody had a similar problem?

The problem is that you store non-linear values which are truncated, so when you read the depth values later on you get a choppy result, because you lose accuracy the farther you are from the znear plane. No matter what you evaluate, you will not obtain better results unless you:

  1. Lower accuracy loss

    You can change the znear, zfar values so they are closer together. Enlarge znear as much as you can so the more accurate area covers more of your scene.

    Another option is to use more bits per depth buffer value (16 bits is too low). I am not sure if you can do this in OpenGL ES, but in standard OpenGL you can use 24 or 32 bits on most cards.

  2. Use a linear depth buffer

    So store linear values into the depth buffer. There are two ways. One is to compute the depth so that after all the underlying operations you get a linear value.

    Another option is to use a separate texture/FBO and store the linear depths directly into it. The problem is that you cannot use its contents in the same rendering pass (see the sketch after this list).
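A minimal sketch of that second option for the asker's GLSL ES 3.0 profile, writing linear depth into an extra color attachment of an FBO (the uniform/varying names and the R32F attachment choice are assumptions, not from the original answer):

#version 300 es
precision highp float;

uniform float znear;
uniform float zfar;

in float cameraDepth; // camera-space depth passed from the vertex shader (assumed positive)

layout(location = 0) out vec4 out_color;        // normal color attachment
layout(location = 1) out vec4 out_lineardepth;  // e.g. R32F (needs EXT_color_buffer_float on ES 3.0)

void main(void) {
    out_color = vec4(1.0); // your normal shading goes here
    // Store depth remapped linearly to [0,1]; sample this texture
    // in a later pass, not in the same one.
    out_lineardepth = vec4((cameraDepth - znear) / (zfar - znear));
}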

[Edit1] Linear Depth buffer

To linearize the depth buffer itself (not just the values taken from it), try this:

Vertex:

varying float depth;
void main()
    {
    vec4 p=ftransform();
    depth=p.z;
    gl_Position=p;
    gl_FrontColor = gl_Color;
    }

Fragment:

uniform float znear,zfar;
varying float depth; // original z in camera space instead of gl_FragCoord.z, which is already truncated
void main(void)
    {
    float z=(depth-znear)/(zfar-znear);
    gl_FragDepth=z;
    gl_FragColor=gl_Color;
    }

Non-linear depth buffer linearized on the CPU side (as you do): (image: CPU-side result)

Linear depth buffer on the GPU side (as you should): (image: GPU-side result)

The scene parameters are:

// 24 bits per Depth value
const double zang =   60.0;
const double znear=    0.01;
const double zfar =20000.0;

and a simple rotated plate covering the whole depth field of view. Both images were taken by glReadPixels(0,0,scr.xs,scr.ys,GL_DEPTH_COMPONENT,GL_FLOAT,zed); and transformed to a 2D RGB texture on the CPU side, then rendered as a single QUAD covering the whole screen with unit matrices...

Now to obtain the original depth value from the linear depth buffer you just do this:

z = znear + (zfar-znear)*depth_value;
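For a later pass that consumes the linear depth, a small sketch in the asker's GLSL ES 3.0 profile (the sampler, uniform, and varying names are assumptions): sample the depth texture and invert the mapping above:

#version 300 es
precision highp float;

uniform sampler2D depthTex; // texture holding the linear [0,1] depths
uniform float znear;
uniform float zfar;

in vec2 uv;
layout(location = 0) out vec4 out_color;

void main(void) {
    float d = texture(depthTex, uv).r;      // linear 0..1 depth value
    float z = znear + (zfar - znear) * d;   // back to camera-space depth
    out_color = vec4(vec3(d), 1.0);         // visualize d; use z as needed
}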

I used the ancient stuff just to keep this simple, so port it to your profile... (a possible port follows below).
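A possible port of the Edit1 shaders to the asker's GLSL ES 3.0 profile, as a hedged sketch (the matrix uniform and attribute names are assumptions; the fixed-function color plumbing is replaced by a constant color; gl_FragDepth requires ES 3.0):

// Vertex (GLSL ES 3.0)
#version 300 es
uniform mat4 mvpMatrix;   // replaces the fixed-function ftransform()
in vec4 position;
out float depth;          // clip-space z, before the perspective divide

void main(void) {
    vec4 p = mvpMatrix * position;
    depth = p.z;
    gl_Position = p;
}

// Fragment (GLSL ES 3.0)
#version 300 es
precision highp float;
uniform float znear, zfar;
in float depth;
layout(location = 0) out vec4 out_color;

void main(void) {
    // Write a linearly remapped depth instead of the hyperbolic one.
    gl_FragDepth = (depth - znear) / (zfar - znear);
    out_color = vec4(1.0); // your normal shading goes here
}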

Beware, I do not code in OpenGL ES nor iOS, so I hope I did not miss something related to that (I am used to Win and PC).

To show the difference I added another rotated plate to the same scene (so they intersect) and used colored output (no more depth readback):

(image: intersecting plates)

As you can see, the linear depth buffer is much better (for scenes covering a large part of the depth FOV).
