
Implementing depth testing for semi-transparent objects

I've been carefully trawling the internet for the past two days to understand depth testing for semi-transparent objects. I've read multiple papers/tutorials on the subject, and in theory I believe I understand how it works. However, none of them give me actual example code.

I have three requirements for my depth testing of semi-transparent objects:

  1. It should be order-independent.

  2. It should work when two quads of the same object intersect each other, both semi-transparent. Imagine a grass object that looks like an X when viewed from above:

[image: two grass quads crossing in an X shape, seen from above]

  3. It should correctly render a semi-transparent player rgba(0, 1, 0, 0.5) behind a building's window rgba(0, 0, 1, 0.5), but in front of a background object rgba(1, 0, 0, 1):

[image: the semi-transparent player behind the window but in front of the background; the bar on the far left shows the expected blended color]

The line on the far left is how I imagine the light/color changing as it travels through the semi-transparent objects towards the camera.
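For reference, this is what standard back-to-front "over" blending (gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA)) should produce for that third requirement. A small sketch; the over helper is only for illustration:

 // Expected result of requirement 3, assuming standard "over" compositing.
 function over(src, dst) {
   // src/dst are [r, g, b, a]; blend src over an opaque dst
   const a = src[3];
   return [
     src[0] * a + dst[0] * (1 - a),
     src[1] * a + dst[1] * (1 - a),
     src[2] * a + dst[2] * (1 - a),
     1, // destination is opaque, so the result stays opaque
   ];
 }

 const background = [1, 0, 0, 1.0]; // opaque red
 const player     = [0, 1, 0, 0.5]; // semi-transparent green
 const window_    = [0, 0, 1, 0.5]; // semi-transparent blue (underscore avoids the browser global)

 // Blend back-to-front: background, then player, then window
 const result = over(window_, over(player, background));
 console.log(result); // [0.25, 0.25, 0.5, 1]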

Final Thoughts

I suspect the best approach to go for is depth peeling, but I'm still lacking an implementation/example. I'm leaning towards this approach because the game is 2.5D, and although depth peeling could get dangerous for performance (lots of layers to peel), there will never need to be more than two semi-transparent objects to "peel".
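For what it's worth, the core of a depth-peeling pass is small. Here is a minimal sketch of the fragment shader for the second and later peels, assuming WebGL2/GLSL ES 3.00 and a u_prevDepth uniform holding the depth texture written by the previous peel (the names are mine, not from any particular sample):

 #version 300 es
 precision highp float;
 // Depth-peeling sketch: fragment shader for the second and later peels.
 // u_prevDepth is the depth texture from the previous peel pass
 // (compare mode NONE, so it samples as a plain float in .r).
 uniform sampler2D u_prevDepth;
 in vec4 v_color;
 out vec4 fragColor;

 void main() {
   float prev = texelFetch(u_prevDepth, ivec2(gl_FragCoord.xy), 0).r;
   // Discard anything at or in front of the previously peeled layer;
   // the normal depth test then keeps the nearest of what remains.
   if (gl_FragCoord.z <= prev) discard;
   fragColor = v_color;
 }

The first peel is just a normal depth-tested pass. Each peel renders into its own color attachment, and with at most two semi-transparent layers you would stop after two peels and blend the layers back-to-front over the opaque scene.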

I'm already familiar with framebuffers and how to code them (I do some post-processing effects with them). I will be using them, right?

Most of my OpenGL knowledge comes from this tutorial, but it covers depth testing and semi-transparency separately. Sadly, it also doesn't cover order-independent transparency at all (see the bottom of the Blending page).

Finally, please don't answer only in theory, e.g.

Draw opaque, draw transparent, draw opaque again, etc.

My ideal answer will contain code showing how the buffers are configured, the shaders, and screenshots of each pass with an explanation of what it's doing.

The programming language used is not too important, as long as it uses OpenGL 4 or newer. The non-OpenGL code can be pseudocode (I don't care how you sort an array or create a GLFW window).

EDIT:

I'm updating my question to include an example of the current state of my code. This example draws the semi-transparent player (green) first, the opaque background (red) second, and then the semi-transparent window (blue). However, the depth should be determined by the Z position of each square, not by the order in which it is drawn.

 (function() {
   // Page initialization: load gl-matrix, then start
   var script = document.createElement('script');
   script.onload = function() { main(); };
   script.src = 'https://mdn.github.io/webgl-examples/tutorial/gl-matrix.js';
   document.head.appendChild(script);
 })();

 //
 // Start here
 //
 function main() {
   const canvas = document.querySelector('#glcanvas');
   const gl = canvas.getContext('webgl', {alpha: false});

   // If we don't have a GL context, give up now
   if (!gl) {
     alert('Unable to initialize WebGL. Your browser or machine may not support it.');
     return;
   }

   // Vertex shader program
   const vsSource = `
     attribute vec4 aVertexPosition;
     attribute vec4 aVertexColor;
     uniform mat4 uModelViewMatrix;
     uniform mat4 uProjectionMatrix;
     varying lowp vec4 vColor;
     void main(void) {
       gl_Position = uProjectionMatrix * uModelViewMatrix * aVertexPosition;
       vColor = aVertexColor;
     }
   `;

   // Fragment shader program
   const fsSource = `
     varying lowp vec4 vColor;
     void main(void) {
       gl_FragColor = vColor;
     }
   `;

   // Initialize a shader program; this is where all the lighting
   // for the vertices and so forth is established.
   const shaderProgram = initShaderProgram(gl, vsSource, fsSource);

   // Collect all the info needed to use the shader program:
   // the attribute locations for aVertexPosition and aVertexColor,
   // and the uniform locations.
   const programInfo = {
     program: shaderProgram,
     attribLocations: {
       vertexPosition: gl.getAttribLocation(shaderProgram, 'aVertexPosition'),
       vertexColor: gl.getAttribLocation(shaderProgram, 'aVertexColor'),
     },
     uniformLocations: {
       projectionMatrix: gl.getUniformLocation(shaderProgram, 'uProjectionMatrix'),
       modelViewMatrix: gl.getUniformLocation(shaderProgram, 'uModelViewMatrix'),
     },
   };

   // Build all the objects we'll be drawing.
   const buffers = initBuffers(gl);

   // Draw the scene
   drawScene(gl, programInfo, buffers);
 }

 //
 // initBuffers
 //
 // Initialize the buffers for the three squares:
 // player (green), background (red), window (blue).
 //
 function initBuffers(gl) {
   // Player square positions
   const positionBuffer0 = gl.createBuffer();
   gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer0);
   var positions = [
      0.5,  0.5,
     -0.5,  0.5,
      0.5, -0.5,
     -0.5, -0.5,
   ];
   gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW);

   // Player colors: semi-transparent green for every vertex
   var colors = [
     0.0, 1.0, 0.0, 0.5,
     0.0, 1.0, 0.0, 0.5,
     0.0, 1.0, 0.0, 0.5,
     0.0, 1.0, 0.0, 0.5,
   ];
   const colorBuffer0 = gl.createBuffer();
   gl.bindBuffer(gl.ARRAY_BUFFER, colorBuffer0);
   gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(colors), gl.STATIC_DRAW);

   // Background square positions
   const positionBuffer1 = gl.createBuffer();
   gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer1);
   positions = [
      2.0,  0.4,
     -2.0,  0.4,
      2.0, -2.0,
     -2.0, -2.0,
   ];
   gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW);

   // Background colors: opaque red for every vertex
   colors = [
     1.0, 0.0, 0.0, 1.0,
     1.0, 0.0, 0.0, 1.0,
     1.0, 0.0, 0.0, 1.0,
     1.0, 0.0, 0.0, 1.0,
   ];
   const colorBuffer1 = gl.createBuffer();
   gl.bindBuffer(gl.ARRAY_BUFFER, colorBuffer1);
   gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(colors), gl.STATIC_DRAW);

   // Window square positions
   const positionBuffer2 = gl.createBuffer();
   gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer2);
   positions = [
      1.0,  1.0,
     -0.0,  1.0,
      1.0, -1.0,
     -0.0, -1.0,
   ];
   gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW);

   // Window colors: semi-transparent blue for every vertex
   colors = [
     0.0, 0.0, 1.0, 0.5,
     0.0, 0.0, 1.0, 0.5,
     0.0, 0.0, 1.0, 0.5,
     0.0, 0.0, 1.0, 0.5,
   ];
   const colorBuffer2 = gl.createBuffer();
   gl.bindBuffer(gl.ARRAY_BUFFER, colorBuffer2);
   gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(colors), gl.STATIC_DRAW);

   return {
     position0: positionBuffer0, color0: colorBuffer0,
     position1: positionBuffer1, color1: colorBuffer1,
     position2: positionBuffer2, color2: colorBuffer2,
   };
 }

 //
 // Draw the scene.
 //
 function drawScene(gl, programInfo, buffers) {
   gl.clearColor(0.0, 0.0, 0.0, 1.0);  // Clear to black, fully opaque
   gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
   //gl.clearDepth(1.0);               // Clear everything

   gl.disable(gl.DEPTH_TEST);
   gl.enable(gl.BLEND);
   gl.blendEquation(gl.FUNC_ADD);
   gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);

   // Create a perspective matrix with a 45 degree field of view,
   // a width/height ratio that matches the display size of the canvas,
   // and a visible depth range of 0.1 to 100 units.
   const fieldOfView = 45 * Math.PI / 180;   // in radians
   const aspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
   const zNear = 0.1;
   const zFar = 100.0;
   const projectionMatrix = mat4.create();

   // note: glmatrix.js always has the first argument
   // as the destination to receive the result.
   mat4.perspective(projectionMatrix, fieldOfView, aspect, zNear, zFar);

   // Set the drawing position to the "identity" point (the center of
   // the scene), then move back to where we want to start drawing.
   const modelViewMatrix = mat4.create();
   mat4.translate(modelViewMatrix,    // destination matrix
                  modelViewMatrix,    // matrix to translate
                  [-0.0, 0.0, -6.0]); // amount to translate

   function drawSquare(positionbuffer, colorbuffer) {
     // Tell WebGL how to pull out the positions from the position
     // buffer into the vertexPosition attribute
     {
       const numComponents = 2;
       const type = gl.FLOAT;
       const normalize = false;
       const stride = 0;
       const offset = 0;
       gl.bindBuffer(gl.ARRAY_BUFFER, positionbuffer);
       gl.vertexAttribPointer(
           programInfo.attribLocations.vertexPosition,
           numComponents, type, normalize, stride, offset);
       gl.enableVertexAttribArray(programInfo.attribLocations.vertexPosition);
     }

     // Tell WebGL how to pull out the colors from the color buffer
     // into the vertexColor attribute.
     {
       const numComponents = 4;
       const type = gl.FLOAT;
       const normalize = false;
       const stride = 0;
       const offset = 0;
       gl.bindBuffer(gl.ARRAY_BUFFER, colorbuffer);
       gl.vertexAttribPointer(
           programInfo.attribLocations.vertexColor,
           numComponents, type, normalize, stride, offset);
       gl.enableVertexAttribArray(programInfo.attribLocations.vertexColor);
     }

     // Tell WebGL to use our program when drawing
     gl.useProgram(programInfo.program);

     // Set the shader uniforms
     gl.uniformMatrix4fv(programInfo.uniformLocations.projectionMatrix,
                         false, projectionMatrix);
     gl.uniformMatrix4fv(programInfo.uniformLocations.modelViewMatrix,
                         false, modelViewMatrix);

     {
       const offset = 0;
       const vertexCount = 4;
       gl.drawArrays(gl.TRIANGLE_STRIP, offset, vertexCount);
     }
   }

   drawSquare(buffers.position0, buffers.color0); // Player
   drawSquare(buffers.position1, buffers.color1); // Background
   drawSquare(buffers.position2, buffers.color2); // Window
 }

 //
 // Initialize a shader program, so WebGL knows how to draw our data
 //
 function initShaderProgram(gl, vsSource, fsSource) {
   const vertexShader = loadShader(gl, gl.VERTEX_SHADER, vsSource);
   const fragmentShader = loadShader(gl, gl.FRAGMENT_SHADER, fsSource);

   // Create the shader program
   const shaderProgram = gl.createProgram();
   gl.attachShader(shaderProgram, vertexShader);
   gl.attachShader(shaderProgram, fragmentShader);
   gl.linkProgram(shaderProgram);

   // If creating the shader program failed, alert
   if (!gl.getProgramParameter(shaderProgram, gl.LINK_STATUS)) {
     alert('Unable to initialize the shader program: ' +
           gl.getProgramInfoLog(shaderProgram));
     return null;
   }
   return shaderProgram;
 }

 //
 // Creates a shader of the given type, uploads the source and compiles it.
 //
 function loadShader(gl, type, source) {
   const shader = gl.createShader(type);

   // Send the source to the shader object and compile it
   gl.shaderSource(shader, source);
   gl.compileShader(shader);

   // See if it compiled successfully
   if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
     alert('An error occurred compiling the shaders: ' +
           gl.getShaderInfoLog(shader));
     gl.deleteShader(shader);
     return null;
   }
   return shader;
 }
 <!DOCTYPE html>
 <html>
   <head>
     <meta charset="utf-8">
     <meta name="viewport" content="width=device-width">
     <title></title>
   </head>
   <body>
     <canvas id="glcanvas" width="640" height="480"></canvas>
   </body>
 </html>

I have three requirements for my depth testing of semi-transparent objects

It's actually quite rare to have self-intersecting objects with partially transparent (actually blended) samples. The common cases for self-intersecting geometry are grass and leaves. However, in these cases the areas actually covered by grass and leaves are not transparent; they are opaque.

The common solution here is alpha testing. Render the leaves as an opaque (not blended) quad (with a normal depth test and write), and discard fragments which have insufficient alpha (e.g. because they are outside of the leaf). Because the individual samples here are opaque, you get order independence for free: the depth test works just as you would expect for an opaque object.
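A minimal sketch of such an alpha-test fragment shader, in GLSL ES 1.00 to match the question's first snippet (uTexture and vTexCoord are illustrative names, assuming a leaf texture whose alpha is 0 outside the leaf):

 precision mediump float;
 uniform sampler2D uTexture;
 varying vec2 vTexCoord;

 void main() {
   vec4 color = texture2D(uTexture, vTexCoord);
   // Discard fragments with insufficient alpha; surviving fragments are
   // fully opaque, so depth test and depth write behave as for any
   // opaque object.
   if (color.a < 0.5) discard;
   gl_FragColor = vec4(color.rgb, 1.0);
 }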

If you want blended edges, then enable alpha-to-coverage and let the multisample resolve clean up the edges a little.
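A sketch of the state change involved, assuming a multisampled default framebuffer (antialias: true, which is the default for a WebGL canvas):

 // Alpha-to-coverage only has an effect on a multisampled render target.
 gl.enable(gl.SAMPLE_ALPHA_TO_COVERAGE);
 // ... draw the alpha-tested foliage here; the fragment alpha now controls
 // how many of the MSAA samples are covered, softening the edges ...
 gl.disable(gl.SAMPLE_ALPHA_TO_COVERAGE);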

For the small amount of actually transparent stuff you have left, you normally need to do a back-to-front sort on the CPU, and render it after the opaque pass.
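A sketch of that transparent pass, with transparentObjects, viewZ and drawObject as placeholder names:

 // GL state for the transparent pass: test against opaque depth,
 // but don't write depth, and blend back-to-front.
 gl.enable(gl.DEPTH_TEST);
 gl.depthMask(false);
 gl.enable(gl.BLEND);
 gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);

 // Sort by view-space z (with the camera looking down -z, farther
 // objects have more negative z, so ascending order is farthest-first).
 transparentObjects
   .map(obj => ({ obj, z: viewZ(obj) }))
   .sort((a, b) => a.z - b.z)
   .forEach(({ obj }) => drawObject(obj));

 gl.depthMask(true); // restore depth writes for the next frame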

Proper OIT is possible, but it is generally quite an expensive technique, so I've yet to see anyone actually use it outside of an academic environment (at least on mobile OpenGL ES implementations).

This seems to be what the paper linked by ripi2 is doing:

 function main() {
   const m4 = twgl.m4;
   const gl = document.querySelector('canvas').getContext('webgl2', {alpha: false});
   if (!gl) {
     alert('need WebGL2');
     return;
   }
   const ext = gl.getExtension('EXT_color_buffer_float');
   if (!ext) {
     alert('EXT_color_buffer_float');
     return;
   }

   const vs = `
   #version 300 es
   layout(location=0) in vec4 position;
   uniform mat4 u_matrix;
   void main() {
     gl_Position = u_matrix * position;
   }
   `;

   const checkerFS = `
   #version 300 es
   precision highp float;
   uniform vec4 color1;
   uniform vec4 color2;
   out vec4 fragColor;
   void main() {
     ivec2 grid = ivec2(gl_FragCoord.xy) / 32;
     fragColor = mix(color1, color2, float((grid.x + grid.y) % 2));
   }
   `;

   const transparentFS = `
   #version 300 es
   precision highp float;
   uniform vec4 Ci;
   out vec4 fragData[2];
   // eq (10) from the paper
   float w(float z, float a) {
     return a * max(pow(10.0, -2.0), 3.0 * pow(10.0, 3.0) * pow(1.0 - z, 3.));
   }
   void main() {
     float ai = Ci.a;
     float zi = gl_FragCoord.z;
     float wresult = w(zi, ai);
     fragData[0] = vec4(Ci.rgb * wresult, ai);
     fragData[1].r = ai * wresult;
   }
   `;

   const compositeFS = `
   #version 300 es
   precision highp float;
   uniform sampler2D ATexture;
   uniform sampler2D BTexture;
   out vec4 fragColor;
   void main() {
     vec4 accum = texelFetch(ATexture, ivec2(gl_FragCoord.xy), 0);
     float r = accum.a;
     accum.a = texelFetch(BTexture, ivec2(gl_FragCoord.xy), 0).r;
     fragColor = vec4(accum.rgb / clamp(accum.a, 1e-4, 5e4), r);
   }
   `;

   const checkerProgramInfo = twgl.createProgramInfo(gl, [vs, checkerFS]);
   const transparentProgramInfo = twgl.createProgramInfo(gl, [vs, transparentFS]);
   const compositeProgramInfo = twgl.createProgramInfo(gl, [vs, compositeFS]);

   const bufferInfo = twgl.primitives.createXYQuadBufferInfo(gl);

   const fbi = twgl.createFramebufferInfo(gl, [
     { internalFormat: gl.RGBA32F, minMag: gl.NEAREST },
     { internalFormat: gl.R32F, minMag: gl.NEAREST },
   ]);

   function render(time) {
     time *= 0.001;

     twgl.setBuffersAndAttributes(gl, transparentProgramInfo, bufferInfo);

     // drawOpaqueSurfaces();
     gl.useProgram(checkerProgramInfo.program);
     gl.disable(gl.BLEND);
     twgl.setUniforms(checkerProgramInfo, {
       color1: [.5, .5, .5, 1],
       color2: [.7, .7, .7, 1],
       u_matrix: m4.identity(),
     });
     twgl.drawBufferInfo(gl, bufferInfo);

     twgl.bindFramebufferInfo(gl, fbi);
     gl.drawBuffers([gl.COLOR_ATTACHMENT0, gl.COLOR_ATTACHMENT1]);
     gl.clearBufferfv(gl.COLOR, 0, new Float32Array([0, 0, 0, 1]));
     gl.clearBufferfv(gl.COLOR, 1, new Float32Array([1, 1, 1, 1]));

     gl.depthMask(false);
     gl.enable(gl.BLEND);
     gl.blendFuncSeparate(gl.ONE, gl.ONE, gl.ZERO, gl.ONE_MINUS_SRC_ALPHA);

     gl.useProgram(transparentProgramInfo.program);

     // drawTransparentSurfaces();
     const quads = [
       [ .4,  0,  0, .4],
       [ .4, .4,  0, .4],
       [  0, .4,  0, .4],
       [  0, .4, .4, .4],
       [  0, .0, .4, .4],
       [ .4, .0, .4, .4],
     ];
     quads.forEach((color, ndx) => {
       const u = ndx / (quads.length - 1);
       // change the order every second
       const v = ((ndx + time | 0) % quads.length) / (quads.length - 1);
       const xy = (u * 2 - 1) * .25;
       const z  = (v * 2 - 1) * .25;
       let mat = m4.identity();
       mat = m4.translate(mat, [xy, xy, z]);
       mat = m4.scale(mat, [.3, .3, 1]);
       twgl.setUniforms(transparentProgramInfo, {
         Ci: color,
         u_matrix: mat,
       });
       twgl.drawBufferInfo(gl, bufferInfo);
     });

     twgl.bindFramebufferInfo(gl, null);
     gl.drawBuffers([gl.BACK]);
     gl.blendFunc(gl.ONE_MINUS_SRC_ALPHA, gl.SRC_ALPHA);
     gl.useProgram(compositeProgramInfo.program);
     twgl.setUniforms(compositeProgramInfo, {
       ATexture: fbi.attachments[0],
       BTexture: fbi.attachments[1],
       u_matrix: m4.identity(),
     });
     twgl.drawBufferInfo(gl, bufferInfo);

     /* only needed if {alpha: false} not passed into getContext
     gl.colorMask(false, false, false, true);
     gl.clearColor(1, 1, 1, 1);
     gl.clear(gl.COLOR_BUFFER_BIT);
     gl.colorMask(true, true, true, true);
     */

     requestAnimationFrame(render);
   }
   requestAnimationFrame(render);
 }
 main();
 <canvas></canvas>
 <script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>

Some things to note:

  • It's using WebGL2, but it should be possible in WebGL1; you'd have to change the shaders to use GLSL ES 1.0.
  • It's using floating point textures. The paper mentions you can use half-float textures as well. Note that rendering to both half and full float textures is an optional feature even in WebGL2. I believe most mobile hardware can render to half but not to full float.
  • It's using weight equation 10 from the paper. There are 4 weight equations in the paper: 7, 8, 9, and 10. To do 7, 8, or 9 you'd need to pass view-space z from the vertex shader to the fragment shader (see the sketch after this list).
  • It's switching the order of drawing every second.
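As a sketch of what passing view-space z looks like, here is a variant of the transparent fragment shader using equation (7), which the later snippets also use; v_viewPosition mirrors the varying in the geometry example below, and whether abs(z) or another mapping is appropriate is per the paper's equations:

 #version 300 es
 precision highp float;
 // Sketch: weight eq (7) driven by view-space z instead of gl_FragCoord.z.
 // The vertex shader passes v_viewPosition = u_modelView * position,
 // exactly as in the geometry example later in this answer.
 in vec4 v_viewPosition;
 uniform vec4 Ci;            // input color, as in the snippet above
 out vec4 fragData[2];

 // eq (7) from the paper
 float w(float z, float a) {
   return a * max(
     pow(10.0, -2.0),
     min(3.0 * pow(10.0, 3.0),
         10.0 / (pow(10.0, -5.0) +
                 pow(abs(z) / 5.0, 2.0) +
                 pow(abs(z) / 200.0, 6.0))));
 }

 void main() {
   float ai = Ci.a;
   float wresult = w(abs(v_viewPosition.z), ai);  // view-space depth
   fragData[0] = vec4(Ci.rgb * wresult, ai);
   fragData[1].r = ai * wresult;
 }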

The code is pretty straightforward.

It creates 3 shaders: one to draw a checkerboard, just so we have something opaque to see the transparent stuff drawn above it; one that is the transparent-object shader; and one that composites the transparent stuff into the scene.

Next it makes 2 textures, a floating point RGBA32F texture and a floating point R32F texture (red channel only), and attaches them to a framebuffer. (That is all done in the one function twgl.createFramebufferInfo; by default that function makes the textures the same size as the canvas.)
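If you'd rather not use twgl, this is roughly the equivalent raw WebGL2 setup, as a sketch (error checking omitted; createOITFramebuffer is an illustrative name, and rendering to these formats requires the EXT_color_buffer_float check done above):

 function createOITFramebuffer(gl, width, height) {
   function makeTexture(internalFormat) {
     const tex = gl.createTexture();
     gl.bindTexture(gl.TEXTURE_2D, tex);
     gl.texStorage2D(gl.TEXTURE_2D, 1, internalFormat, width, height);
     gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
     gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
     return tex;
   }
   const accumTex  = makeTexture(gl.RGBA32F); // "accumulate" target
   const revealTex = makeTexture(gl.R32F);    // "revealage" target
   const fb = gl.createFramebuffer();
   gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
   gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                           gl.TEXTURE_2D, accumTex, 0);
   gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT1,
                           gl.TEXTURE_2D, revealTex, 0);
   return { fb, accumTex, revealTex };
 }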

We make a single quad that goes from -1 to +1.

We use that quad to draw the checkerboard into the canvas.

Then we turn on blending, set up the blend functions as the paper describes, switch to rendering onto our framebuffer, and clear that framebuffer. Note that the two attachments are cleared to 0,0,0,1 and 1 respectively. This is the version where we don't have separate blend functions per draw buffer. If you switch to the version that can use a separate blend function per draw buffer, then you need to clear to different values and use a different shader (see the paper).

Using our transparency shader, we use that same quad to draw 6 rectangles, each in a single solid color. I just used solid colors to keep it simple. Each is at a different Z, and the Zs change every second just to show the result of Z changing.

In the shader, Ci is the input color. According to the paper it's expected to be a premultiplied-alpha color. fragData[0] is the "accumulate" texture, fragData[1] is the "revealage" texture (only one channel, red), and the w function implements equation 10 from the paper.

After all 6 quads are drawn, we switch back to rendering to the canvas and use the compositing shader to composite the transparency result with the non-transparent canvas contents.

Here's an example with some geometry. Differences:

  • It's using equation (7) from the paper instead of (10).
  • In order to do correct z-buffering, the depth buffer needs to be shared between the opaque and the transparent rendering. So there are 2 framebuffers: one has RGBA8 + depth, the other RGBA32F + R32F + depth, and the depth buffer is shared (see the sketch after this list).
  • The transparent renderer computes simple lighting and then uses the result as the Ci value from the paper.
  • After compositing the transparent into the opaque, we still need to copy the opaque buffer into the canvas to see the result.
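In raw WebGL2, the depth sharing amounts to attaching one depth renderbuffer (or depth texture) to both framebuffers. A sketch, with width, height, opaqueFb and transparentFb as placeholders:

 const depthBuffer = gl.createRenderbuffer();
 gl.bindRenderbuffer(gl.RENDERBUFFER, depthBuffer);
 gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, width, height);

 // Attach the same depth storage to both framebuffers.
 for (const fb of [opaqueFb, transparentFb]) {
   gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
   gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
                              gl.RENDERBUFFER, depthBuffer);
 }
 // The transparent pass then keeps gl.DEPTH_TEST enabled but calls
 // gl.depthMask(false), so opaque geometry occludes transparent geometry
 // without transparent fragments writing depth themselves.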

 function main() {
   const m4 = twgl.m4;
   const v3 = twgl.v3;
   const gl = document.querySelector('canvas').getContext('webgl2', {alpha: false});
   if (!gl) {
     alert('need WebGL2');
     return;
   }
   const ext = gl.getExtension('EXT_color_buffer_float');
   if (!ext) {
     alert('EXT_color_buffer_float');
     return;
   }

   const vs = `
   #version 300 es
   layout(location=0) in vec4 position;
   layout(location=1) in vec3 normal;
   uniform mat4 u_projection;
   uniform mat4 u_modelView;
   out vec4 v_viewPosition;
   out vec3 v_normal;
   void main() {
     gl_Position = u_projection * u_modelView * position;
     v_viewPosition = u_modelView * position;
     v_normal = (u_modelView * vec4(normal, 0)).xyz;
   }
   `;

   const checkerFS = `
   #version 300 es
   precision highp float;
   uniform vec4 color1;
   uniform vec4 color2;
   out vec4 fragColor;
   void main() {
     ivec2 grid = ivec2(gl_FragCoord.xy) / 32;
     fragColor = mix(color1, color2, float((grid.x + grid.y) % 2));
   }
   `;

   const opaqueFS = `
   #version 300 es
   precision highp float;
   in vec4 v_viewPosition;
   in vec3 v_normal;
   uniform vec4 u_color;
   uniform vec3 u_lightDirection;
   out vec4 fragColor;
   void main() {
     float light = abs(dot(normalize(v_normal), u_lightDirection));
     fragColor = vec4(u_color.rgb * light, u_color.a);
   }
   `;

   const transparentFS = `
   #version 300 es
   precision highp float;
   uniform vec4 u_color;
   uniform vec3 u_lightDirection;
   in vec4 v_viewPosition;
   in vec3 v_normal;
   out vec4 fragData[2];

   // eq (7)
   float w(float z, float a) {
     return a * max(
       pow(10.0, -2.0),
       min(3.0 * pow(10.0, 3.0),
           10.0 / (pow(10.0, -5.0) +
                   pow(abs(z) / 5.0, 2.0) +
                   pow(abs(z) / 200.0, 6.0))));
   }

   void main() {
     float light = abs(dot(normalize(v_normal), u_lightDirection));
     vec4 Ci = vec4(u_color.rgb * light, u_color.a);
     float ai = Ci.a;
     float zi = gl_FragCoord.z;
     float wresult = w(zi, ai);
     fragData[0] = vec4(Ci.rgb * wresult, ai);
     fragData[1].r = ai * wresult;
   }
   `;

   const compositeFS = `
   #version 300 es
   precision highp float;
   uniform sampler2D ATexture;
   uniform sampler2D BTexture;
   out vec4 fragColor;
   void main() {
     vec4 accum = texelFetch(ATexture, ivec2(gl_FragCoord.xy), 0);
     float r = accum.a;
     accum.a = texelFetch(BTexture, ivec2(gl_FragCoord.xy), 0).r;
     fragColor = vec4(accum.rgb / clamp(accum.a, 1e-4, 5e4), r);
   }
   `;

   const blitFS = `
   #version 300 es
   precision highp float;
   uniform sampler2D u_texture;
   out vec4 fragColor;
   void main() {
     fragColor = texelFetch(u_texture, ivec2(gl_FragCoord.xy), 0);
   }
   `;

   const checkerProgramInfo = twgl.createProgramInfo(gl, [vs, checkerFS]);
   const opaqueProgramInfo = twgl.createProgramInfo(gl, [vs, opaqueFS]);
   const transparentProgramInfo = twgl.createProgramInfo(gl, [vs, transparentFS]);
   const compositeProgramInfo = twgl.createProgramInfo(gl, [vs, compositeFS]);
   const blitProgramInfo = twgl.createProgramInfo(gl, [vs, blitFS]);

   const xyQuadVertexArrayInfo = makeVAO(checkerProgramInfo, twgl.primitives.createXYQuadBufferInfo(gl));
   const sphereVertexArrayInfo = makeVAO(transparentProgramInfo, twgl.primitives.createSphereBufferInfo(gl, 1, 16, 12));
   const cubeVertexArrayInfo = makeVAO(opaqueProgramInfo, twgl.primitives.createCubeBufferInfo(gl, 1, 1));

   function makeVAO(programInfo, bufferInfo) {
     return twgl.createVertexArrayInfo(gl, programInfo, bufferInfo);
   }

   // In order to do proper zbuffering we need to share the depth buffer
   const opaqueAttachments = [
     { internalFormat: gl.RGBA8, minMag: gl.NEAREST },
     { format: gl.DEPTH_COMPONENT16, minMag: gl.NEAREST },
   ];
   const opaqueFBI = twgl.createFramebufferInfo(gl, opaqueAttachments);
   const transparentAttachments = [
     { internalFormat: gl.RGBA32F, minMag: gl.NEAREST },
     { internalFormat: gl.R32F, minMag: gl.NEAREST },
     { format: gl.DEPTH_COMPONENT16, minMag: gl.NEAREST, attachment: opaqueFBI.attachments[1] },
   ];
   const transparentFBI = twgl.createFramebufferInfo(gl, transparentAttachments);

   function render(time) {
     time *= 0.001;

     if (twgl.resizeCanvasToDisplaySize(gl.canvas)) {
       // if the canvas is resized also resize the framebuffer
       // attachments (the depth buffer will be resized twice
       // but I'm too lazy to fix it)
       twgl.resizeFramebufferInfo(gl, opaqueFBI, opaqueAttachments);
       twgl.resizeFramebufferInfo(gl, transparentFBI, transparentAttachments);
     }

     const aspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
     const fov = 45 * Math.PI / 180;
     const zNear = 0.1;
     const zFar = 500;
     const projection = m4.perspective(fov, aspect, zNear, zFar);
     const eye = [0, 0, -5];
     const target = [0, 0, 0];
     const up = [0, 1, 0];
     const camera = m4.lookAt(eye, target, up);
     const view = m4.inverse(camera);
     const lightDirection = v3.normalize([1, 3, 5]);

     twgl.bindFramebufferInfo(gl, opaqueFBI);
     gl.drawBuffers([gl.COLOR_ATTACHMENT0]);
     gl.depthMask(true);
     gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

     gl.bindVertexArray(xyQuadVertexArrayInfo.vertexArrayObject);

     // drawOpaqueSurfaces();
     // draw checkerboard
     gl.useProgram(checkerProgramInfo.program);
     gl.disable(gl.DEPTH_TEST);
     gl.disable(gl.BLEND);
     twgl.setUniforms(checkerProgramInfo, {
       color1: [.5, .5, .5, 1],
       color2: [.7, .7, .7, 1],
       u_projection: m4.identity(),
       u_modelView: m4.identity(),
     });
     twgl.drawBufferInfo(gl, xyQuadVertexArrayInfo);

     // draw a cube with depth buffer
     gl.enable(gl.DEPTH_TEST);
     {
       gl.useProgram(opaqueProgramInfo.program);
       gl.bindVertexArray(cubeVertexArrayInfo.vertexArrayObject);
       let mat = view;
       mat = m4.rotateX(mat, time * .1);
       mat = m4.rotateY(mat, time * .2);
       mat = m4.scale(mat, [1.5, 1.5, 1.5]);
       twgl.setUniforms(opaqueProgramInfo, {
         u_color: [1, .5, .2, 1],
         u_lightDirection: lightDirection,
         u_projection: projection,
         u_modelView: mat,
       });
       twgl.drawBufferInfo(gl, cubeVertexArrayInfo);
     }

     twgl.bindFramebufferInfo(gl, transparentFBI);
     gl.drawBuffers([gl.COLOR_ATTACHMENT0, gl.COLOR_ATTACHMENT1]);

     // these values change if using separate blend functions
     // per attachment (something WebGL2 does not support)
     gl.clearBufferfv(gl.COLOR, 0, new Float32Array([0, 0, 0, 1]));
     gl.clearBufferfv(gl.COLOR, 1, new Float32Array([1, 1, 1, 1]));

     gl.depthMask(false); // don't write to depth buffer (but still testing)
     gl.enable(gl.BLEND);
     // this changes if using separate blend functions per attachment
     gl.blendFuncSeparate(gl.ONE, gl.ONE, gl.ZERO, gl.ONE_MINUS_SRC_ALPHA);

     gl.useProgram(transparentProgramInfo.program);
     gl.bindVertexArray(sphereVertexArrayInfo.vertexArrayObject);

     // drawTransparentSurfaces();
     const spheres = [
       [ .4,  0,  0, .4],
       [ .4, .4,  0, .4],
       [  0, .4,  0, .4],
       [  0, .4, .4, .4],
       [  0, .0, .4, .4],
       [ .4, .0, .4, .4],
     ];
     spheres.forEach((color, ndx) => {
       const u = ndx + 2;
       let mat = view;
       mat = m4.rotateX(mat, time * u * .1);
       mat = m4.rotateY(mat, time * u * .2);
       mat = m4.translate(mat, [0, 0, 1 + ndx * .1]);
       twgl.setUniforms(transparentProgramInfo, {
         u_color: color,
         u_lightDirection: lightDirection,
         u_projection: projection,
         u_modelView: mat,
       });
       twgl.drawBufferInfo(gl, sphereVertexArrayInfo);
     });

     // composite transparent results with opaque
     twgl.bindFramebufferInfo(gl, opaqueFBI);
     gl.drawBuffers([gl.COLOR_ATTACHMENT0]);
     gl.disable(gl.DEPTH_TEST);
     gl.blendFunc(gl.ONE_MINUS_SRC_ALPHA, gl.SRC_ALPHA);
     gl.useProgram(compositeProgramInfo.program);
     gl.bindVertexArray(xyQuadVertexArrayInfo.vertexArrayObject);
     twgl.setUniforms(compositeProgramInfo, {
       ATexture: transparentFBI.attachments[0],
       BTexture: transparentFBI.attachments[1],
       u_projection: m4.identity(),
       u_modelView: m4.identity(),
     });
     twgl.drawBufferInfo(gl, xyQuadVertexArrayInfo);

     /* only needed if {alpha: false} not passed into getContext
     gl.colorMask(false, false, false, true);
     gl.clearColor(1, 1, 1, 1);
     gl.clear(gl.COLOR_BUFFER_BIT);
     gl.colorMask(true, true, true, true);
     */

     // draw opaque color buffer into canvas
     // could probably use gl.blitFramebuffer
     gl.disable(gl.BLEND);
     twgl.bindFramebufferInfo(gl, null);
     gl.useProgram(blitProgramInfo.program);
     gl.bindVertexArray(xyQuadVertexArrayInfo.vertexArrayObject);
     twgl.setUniforms(blitProgramInfo, {
       u_texture: opaqueFBI.attachments[0],
       u_projection: m4.identity(),
       u_modelView: m4.identity(),
     });
     twgl.drawBufferInfo(gl, xyQuadVertexArrayInfo);

     requestAnimationFrame(render);
   }
   requestAnimationFrame(render);
 }
 main();
 body { margin: 0; }
 canvas { width: 100vw; height: 100vh; display: block; }
 <canvas></canvas>
 <script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>

It occurs to me that rather than using standard OpenGL blending for the last 2 steps (composite followed by blit), we could change the composite shader so it takes 3 textures (ATexture, BTexture, opaqueTexture) and blends in the shader, outputting directly to the canvas. That would be faster.

 function main() {
   const m4 = twgl.m4;
   const v3 = twgl.v3;
   const gl = document.querySelector('canvas').getContext('webgl2', {alpha: false});
   if (!gl) {
     alert('need WebGL2');
     return;
   }
   const ext = gl.getExtension('EXT_color_buffer_float');
   if (!ext) {
     alert('EXT_color_buffer_float');
     return;
   }

   const vs = `
   #version 300 es
   layout(location=0) in vec4 position;
   layout(location=1) in vec3 normal;
   uniform mat4 u_projection;
   uniform mat4 u_modelView;
   out vec4 v_viewPosition;
   out vec3 v_normal;
   void main() {
     gl_Position = u_projection * u_modelView * position;
     v_viewPosition = u_modelView * position;
     v_normal = (u_modelView * vec4(normal, 0)).xyz;
   }
   `;

   const checkerFS = `
   #version 300 es
   precision highp float;
   uniform vec4 color1;
   uniform vec4 color2;
   out vec4 fragColor;
   void main() {
     ivec2 grid = ivec2(gl_FragCoord.xy) / 32;
     fragColor = mix(color1, color2, float((grid.x + grid.y) % 2));
   }
   `;

   const opaqueFS = `
   #version 300 es
   precision highp float;
   in vec4 v_viewPosition;
   in vec3 v_normal;
   uniform vec4 u_color;
   uniform vec3 u_lightDirection;
   out vec4 fragColor;
   void main() {
     float light = abs(dot(normalize(v_normal), u_lightDirection));
     fragColor = vec4(u_color.rgb * light, u_color.a);
   }
   `;

   const transparentFS = `
   #version 300 es
   precision highp float;
   uniform vec4 u_color;
   uniform vec3 u_lightDirection;
   in vec4 v_viewPosition;
   in vec3 v_normal;
   out vec4 fragData[2];

   // eq (7)
   float w(float z, float a) {
     return a * max(
       pow(10.0, -2.0),
       min(3.0 * pow(10.0, 3.0),
           10.0 / (pow(10.0, -5.0) +
                   pow(abs(z) / 5.0, 2.0) +
                   pow(abs(z) / 200.0, 6.0))));
   }

   void main() {
     float light = abs(dot(normalize(v_normal), u_lightDirection));
     vec4 Ci = vec4(u_color.rgb * light, u_color.a);
     float ai = Ci.a;
     float zi = gl_FragCoord.z;
     float wresult = w(zi, ai);
     fragData[0] = vec4(Ci.rgb * wresult, ai);
     fragData[1].r = ai * wresult;
   }
   `;

   const compositeFS = `
   #version 300 es
   precision highp float;
   uniform sampler2D ATexture;
   uniform sampler2D BTexture;
   uniform sampler2D opaqueTexture;
   out vec4 fragColor;
   void main() {
     vec4 accum = texelFetch(ATexture, ivec2(gl_FragCoord.xy), 0);
     float r = accum.a;
     accum.a = texelFetch(BTexture, ivec2(gl_FragCoord.xy), 0).r;
     vec4 transparentColor = vec4(accum.rgb / clamp(accum.a, 1e-4, 5e4), r);
     vec4 opaqueColor = texelFetch(opaqueTexture, ivec2(gl_FragCoord.xy), 0);
     // gl.blendFunc(gl.ONE_MINUS_SRC_ALPHA, gl.SRC_ALPHA);
     fragColor = transparentColor * (1. - r) + opaqueColor * r;
   }
   `;

   const checkerProgramInfo = twgl.createProgramInfo(gl, [vs, checkerFS]);
   const opaqueProgramInfo = twgl.createProgramInfo(gl, [vs, opaqueFS]);
   const transparentProgramInfo = twgl.createProgramInfo(gl, [vs, transparentFS]);
   const compositeProgramInfo = twgl.createProgramInfo(gl, [vs, compositeFS]);

   const xyQuadVertexArrayInfo = makeVAO(checkerProgramInfo, twgl.primitives.createXYQuadBufferInfo(gl));
   const sphereVertexArrayInfo = makeVAO(transparentProgramInfo, twgl.primitives.createSphereBufferInfo(gl, 1, 16, 12));
   const cubeVertexArrayInfo = makeVAO(opaqueProgramInfo, twgl.primitives.createCubeBufferInfo(gl, 1, 1));

   function makeVAO(programInfo, bufferInfo) {
     return twgl.createVertexArrayInfo(gl, programInfo, bufferInfo);
   }

   // In order to do proper zbuffering we need to share the depth buffer
   const opaqueAttachments = [
     { internalFormat: gl.RGBA8, minMag: gl.NEAREST },
     { format: gl.DEPTH_COMPONENT16, minMag: gl.NEAREST },
   ];
   const opaqueFBI = twgl.createFramebufferInfo(gl, opaqueAttachments);
   const transparentAttachments = [
     { internalFormat: gl.RGBA32F, minMag: gl.NEAREST },
     { internalFormat: gl.R32F, minMag: gl.NEAREST },
     { format: gl.DEPTH_COMPONENT16, minMag: gl.NEAREST, attachment: opaqueFBI.attachments[1] },
   ];
   const transparentFBI = twgl.createFramebufferInfo(gl, transparentAttachments);

   function render(time) {
     time *= 0.001;

     if (twgl.resizeCanvasToDisplaySize(gl.canvas)) {
       // if the canvas is resized also resize the framebuffer
       // attachments (the depth buffer will be resized twice
       // but I'm too lazy to fix it)
       twgl.resizeFramebufferInfo(gl, opaqueFBI, opaqueAttachments);
       twgl.resizeFramebufferInfo(gl, transparentFBI, transparentAttachments);
     }

     const aspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
     const fov = 45 * Math.PI / 180;
     const zNear = 0.1;
     const zFar = 500;
     const projection = m4.perspective(fov, aspect, zNear, zFar);
     const eye = [0, 0, -5];
     const target = [0, 0, 0];
     const up = [0, 1, 0];
     const camera = m4.lookAt(eye, target, up);
     const view = m4.inverse(camera);
     const lightDirection = v3.normalize([1, 3, 5]);

     twgl.bindFramebufferInfo(gl, opaqueFBI);
     gl.drawBuffers([gl.COLOR_ATTACHMENT0]);
     gl.depthMask(true);
     gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

     gl.bindVertexArray(xyQuadVertexArrayInfo.vertexArrayObject);

     // drawOpaqueSurfaces();
     // draw checkerboard
     gl.useProgram(checkerProgramInfo.program);
     gl.disable(gl.DEPTH_TEST);
     gl.disable(gl.BLEND);
     twgl.setUniforms(checkerProgramInfo, {
       color1: [.5, .5, .5, 1],
       color2: [.7, .7, .7, 1],
       u_projection: m4.identity(),
       u_modelView: m4.identity(),
     });
     twgl.drawBufferInfo(gl, xyQuadVertexArrayInfo);

     // draw a cube with depth buffer
     gl.enable(gl.DEPTH_TEST);
     {
       gl.useProgram(opaqueProgramInfo.program);
       gl.bindVertexArray(cubeVertexArrayInfo.vertexArrayObject);
       let mat = view;
       mat = m4.rotateX(mat, time * .1);
       mat = m4.rotateY(mat, time * .2);
       mat = m4.scale(mat, [1.5, 1.5, 1.5]);
       twgl.setUniforms(opaqueProgramInfo, {
         u_color: [1, .5, .2, 1],
         u_lightDirection: lightDirection,
         u_projection: projection,
         u_modelView: mat,
       });
       twgl.drawBufferInfo(gl, cubeVertexArrayInfo);
     }

     twgl.bindFramebufferInfo(gl, transparentFBI);
     gl.drawBuffers([gl.COLOR_ATTACHMENT0, gl.COLOR_ATTACHMENT1]);

     // these values change if using separate blend functions
     // per attachment (something WebGL2 does not support)
     gl.clearBufferfv(gl.COLOR, 0, new Float32Array([0, 0, 0, 1]));
     gl.clearBufferfv(gl.COLOR, 1, new Float32Array([1, 1, 1, 1]));

     gl.depthMask(false); // don't write to depth buffer (but still testing)
     gl.enable(gl.BLEND);
     // this changes if using separate blend functions per attachment
     gl.blendFuncSeparate(gl.ONE, gl.ONE, gl.ZERO, gl.ONE_MINUS_SRC_ALPHA);

     gl.useProgram(transparentProgramInfo.program);
     gl.bindVertexArray(sphereVertexArrayInfo.vertexArrayObject);

     // drawTransparentSurfaces();
     const spheres = [
       [ .4,  0,  0, .4],
       [ .4, .4,  0, .4],
       [  0, .4,  0, .4],
       [  0, .4, .4, .4],
       [  0, .0, .4, .4],
       [ .4, .0, .4, .4],
     ];
     spheres.forEach((color, ndx) => {
       const u = ndx + 2;
       let mat = view;
       mat = m4.rotateX(mat, time * u * .1);
       mat = m4.rotateY(mat, time * u * .2);
       mat = m4.translate(mat, [0, 0, 1 + ndx * .1]);
       twgl.setUniforms(transparentProgramInfo, {
         u_color: color,
         u_lightDirection: lightDirection,
         u_projection: projection,
         u_modelView: mat,
       });
       twgl.drawBufferInfo(gl, sphereVertexArrayInfo);
     });

     // composite transparent results with opaque, directly to the canvas
     twgl.bindFramebufferInfo(gl, null);
     gl.disable(gl.DEPTH_TEST);
     gl.disable(gl.BLEND);
     gl.useProgram(compositeProgramInfo.program);
     gl.bindVertexArray(xyQuadVertexArrayInfo.vertexArrayObject);
     twgl.setUniforms(compositeProgramInfo, {
       ATexture: transparentFBI.attachments[0],
       BTexture: transparentFBI.attachments[1],
       opaqueTexture: opaqueFBI.attachments[0],
       u_projection: m4.identity(),
       u_modelView: m4.identity(),
     });
     twgl.drawBufferInfo(gl, xyQuadVertexArrayInfo);

     /* only needed if {alpha: false} not passed into getContext
     gl.colorMask(false, false, false, true);
     gl.clearColor(1, 1, 1, 1);
     gl.clear(gl.COLOR_BUFFER_BIT);
     gl.colorMask(true, true, true, true);
     */

     requestAnimationFrame(render);
   }
   requestAnimationFrame(render);
 }
 main();
 body { margin: 0; }
 canvas { width: 100vw; height: 100vh; display: block; }
 <canvas></canvas>
 <script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
