
Incorrect depthTexture with SSAO

I've been puzzled lately as I've been attempting to get a THREE.DepthTexture to work with the Ambient Occlusion shader. I've had it working before with RGBA unpacking, but after reading about Matt Deslauriers's project, Audiograph, I decided to attempt the method he described for a potential performance boost:

Historically in ThreeJS, you would render your scene with MeshDepthMaterial to a WebGLRenderTarget, and then unpack to a linear depth value when sampling from the depth target. This is fairly expensive and often unnecessary, since many environments support the WEBGL_depth_texture extension.
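For reference, the setup that quote describes looks roughly like this in three.js r86 (a minimal sketch; the extension check mirrors the one in my snippet further down):

 // Attach a DepthTexture to the render target so the depth buffer can be
 // sampled directly, instead of re-rendering the scene with
 // MeshDepthMaterial and unpacking RGBA values.
 const target = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );
 target.depthBuffer = true;
 target.depthTexture = new THREE.DepthTexture();
 target.depthTexture.type = THREE.UnsignedShortType;

 // This path requires the extension; fall back otherwise.
 if ( ! renderer.extensions.get( 'WEBGL_depth_texture' ) ) {
     console.warn( 'WEBGL_depth_texture unsupported; use the MeshDepthMaterial path instead.' );
 }

 // After renderer.render( scene, camera, target ), target.depthTexture can
 // be bound as a sampler2D uniform in a post-processing shader.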

After attempting this method, I somehow ended up with this weird unwanted effect in which lines are all over the terrain:

[Screenshot: lines all over the terrain]

I have set up a small example below in which I have replicated the issue. I feel it's something very obvious that I'm simply glossing over.

I hope someone here is able to point out what I'm missing so that I can get the ambient occlusion working in a way that is a little bit more performant!

Many thanks in advance.

 const scene = new THREE.Scene();
 const camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 2000 );

 // Pivot so the camera can orbit the scene.
 const pivot = new THREE.Object3D();
 pivot.add( camera );
 scene.add( pivot );
 camera.position.set( 0, 250, 500 );
 camera.lookAt( pivot.position );

 const renderer = new THREE.WebGLRenderer();
 renderer.setSize( window.innerWidth, window.innerHeight );
 renderer.gammaInput = true;
 renderer.gammaOutput = true;
 renderer.gammaFactor = 2.2;

 let supportsExtension = false;
 if ( renderer.extensions.get( 'WEBGL_depth_texture' ) ) {
     supportsExtension = true;
 }

 document.body.appendChild( renderer.domElement );

 const createCube = () => {
     const geo = new THREE.BoxGeometry( 500, 500, 500 );
     const mat = new THREE.MeshBasicMaterial( { color: 0x00ff00 } );
     const obj = new THREE.Mesh( geo, mat );
     obj.position.y = -( obj.geometry.parameters.height / 2 );
     scene.add( obj );
 };

 const createSphere = () => {
     const geo = new THREE.SphereGeometry( 100, 12, 8 );
     const mat = new THREE.MeshBasicMaterial( { color: 0xff00ff } );
     const obj = new THREE.Mesh( geo, mat );
     obj.position.y = obj.geometry.parameters.radius;
     scene.add( obj );
 };

 // Create objects
 createCube();
 createSphere();

 const composer = new THREE.EffectComposer( renderer );

 // Render target with an attached depth texture (WEBGL_depth_texture path).
 const target = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );
 target.texture.format = THREE.RGBFormat;
 target.texture.minFilter = THREE.NearestFilter;
 target.texture.magFilter = THREE.NearestFilter;
 target.texture.generateMipmaps = false;
 target.stencilBuffer = false;
 target.depthBuffer = true;
 target.depthTexture = new THREE.DepthTexture();
 target.depthTexture.type = THREE.UnsignedShortType;

 function initPostProcessing() {
     composer.addPass( new THREE.RenderPass( scene, camera ) );

     const pass = new THREE.ShaderPass( {
         uniforms: {
             "tDiffuse":     { value: null },
             "tDepth":       { value: target.depthTexture },
             "resolution":   { value: new THREE.Vector2( 512, 512 ) },
             "cameraNear":   { value: 1 },   // placeholder, overwritten below
             "cameraFar":    { value: 100 }, // placeholder, overwritten below
             "onlyAO":       { value: 0 },
             "aoClamp":      { value: 0.5 },
             "lumInfluence": { value: 0.5 }
         },
         vertexShader: document.getElementById( 'vertexShader' ).textContent,
         fragmentShader: document.getElementById( 'fragmentShader' ).textContent
     } );
     pass.material.precision = 'highp';
     composer.addPass( pass );

     pass.uniforms.tDepth.value = target.depthTexture;
     pass.uniforms.cameraNear.value = camera.near; // 0.1, from the camera above
     pass.uniforms.cameraFar.value = camera.far;   // 2000

     composer.passes[ composer.passes.length - 1 ].renderToScreen = true;
 }
 initPostProcessing();

 const animate = () => {
     requestAnimationFrame( animate );
     pivot.rotation.y += 0.01;

     // Render the scene into the target first (filling the depth texture),
     // then run the post-processing chain.
     renderer.render( scene, camera, target );
     composer.render();
 };
 animate();
 html, body { margin: 0; }
 canvas { display: block; width: 100%; height: 100%; }
 <script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/86/three.js"></script>
 <script src="https://cdn.rawgit.com/mrdoob/three.js/dev/examples/js/postprocessing/EffectComposer.js"></script>
 <script src="https://cdn.rawgit.com/mrdoob/three.js/dev/examples/js/postprocessing/RenderPass.js"></script>
 <script src="https://cdn.rawgit.com/mrdoob/three.js/dev/examples/js/postprocessing/ShaderPass.js"></script>
 <script src="https://cdn.rawgit.com/mrdoob/three.js/dev/examples/js/shaders/CopyShader.js"></script>

 <script id="vertexShader" type="x-shader/x-vertex">
 varying vec2 vUv;

 void main() {
     vUv = uv;
     gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
 }
 </script>

 <script id="fragmentShader" type="x-shader/x-fragment">
 uniform float cameraNear;
 uniform float cameraFar;

 uniform bool onlyAO;        // use only ambient occlusion pass?

 uniform vec2 resolution;    // texture width, height
 uniform float aoClamp;      // depth clamp - reduces haloing at screen edges
 uniform float lumInfluence; // how much luminance affects occlusion

 uniform sampler2D tDiffuse;
 uniform highp sampler2D tDepth;

 varying vec2 vUv;

 // #define PI 3.14159265
 #define DL 2.399963229728653 // PI * ( 3.0 - sqrt( 5.0 ) )
 #define EULER 2.718281828459045

 // user variables
 const int samples = 4;            // ao sample count
 const float radius = 5.0;         // ao radius
 const bool useNoise = false;      // use noise instead of pattern for sample dithering
 const float noiseAmount = 0.0003; // dithering amount
 const float diffArea = 0.4;       // self-shadowing reduction
 const float gDisplace = 0.4;      // gauss bell center

 highp vec2 rand( const vec2 coord ) {
     highp vec2 noise;

     if ( useNoise ) {
         float nx = dot( coord, vec2( 12.9898, 78.233 ) );
         float ny = dot( coord, vec2( 12.9898, 78.233 ) * 2.0 );
         noise = clamp( fract( 43758.5453 * sin( vec2( nx, ny ) ) ), 0.0, 1.0 );
     } else {
         highp float ff = fract( 1.0 - coord.s * ( resolution.x / 2.0 ) );
         highp float gg = fract( coord.t * ( resolution.y / 2.0 ) );
         noise = vec2( 0.25, 0.75 ) * vec2( ff ) + vec2( 0.75, 0.25 ) * gg;
     }

     return ( noise * 2.0 - 1.0 ) * noiseAmount;
 }

 // Convert the non-linear depth-buffer value to a linear depth in [0, 1].
 float readDepth( const in vec2 coord ) {
     float cameraFarPlusNear = cameraFar + cameraNear;
     float cameraFarMinusNear = cameraFar - cameraNear;
     float cameraCoef = 2.0 * cameraNear;

     return cameraCoef / ( cameraFarPlusNear - texture2D( tDepth, coord ).x * cameraFarMinusNear );
 }

 float compareDepths( const in float depth1, const in float depth2, inout int far ) {
     float garea = 2.0;                        // gauss bell width
     float diff = ( depth1 - depth2 ) * 100.0; // depth difference (0-100)

     // reduce left bell width to avoid self-shadowing
     if ( diff < gDisplace ) {
         garea = diffArea;
     } else {
         far = 1;
     }

     float dd = diff - gDisplace;
     float gauss = pow( EULER, -2.0 * dd * dd / ( garea * garea ) );
     return gauss;
 }

 float calcAO( float depth, float dw, float dh ) {
     float dd = radius - depth * radius;
     vec2 vv = vec2( dw, dh );

     vec2 coord1 = vUv + dd * vv;
     vec2 coord2 = vUv - dd * vv;

     float temp1 = 0.0;
     float temp2 = 0.0;
     int far = 0;

     temp1 = compareDepths( depth, readDepth( coord1 ), far );

     // DEPTH EXTRAPOLATION
     if ( far > 0 ) {
         temp2 = compareDepths( readDepth( coord2 ), depth, far );
         temp1 += ( 1.0 - temp1 ) * temp2;
     }

     return temp1;
 }

 void main() {
     highp vec2 noise = rand( vUv );
     float depth = readDepth( vUv );

     float tt = clamp( depth, aoClamp, 1.0 );

     float w = ( 1.0 / resolution.x ) / tt + ( noise.x * ( 1.0 - noise.x ) );
     float h = ( 1.0 / resolution.y ) / tt + ( noise.y * ( 1.0 - noise.y ) );

     float ao = 0.0;

     float dz = 1.0 / float( samples );
     float z = 1.0 - dz / 2.0;
     float l = 0.0;

     // Golden-angle spiral sampling pattern around the current fragment.
     for ( int i = 0; i <= samples; i++ ) {
         float r = sqrt( 1.0 - z );

         float pw = cos( l ) * r;
         float ph = sin( l ) * r;
         ao += calcAO( depth, pw * w, ph * h );
         z = z - dz;
         l = l + DL;
     }

     ao /= float( samples );
     ao = 1.0 - ao;

     vec3 color = texture2D( tDiffuse, vUv ).rgb;

     vec3 lumcoeff = vec3( 0.299, 0.587, 0.114 );
     float lum = dot( color.rgb, lumcoeff );
     vec3 luminance = vec3( lum );

     vec3 final = vec3( color * mix( vec3( ao ), vec3( 1.0 ), luminance * lumInfluence ) ); // mix( color * ao, white, luminance )

     if ( onlyAO ) {
         final = vec3( mix( vec3( ao ), vec3( 1.0 ), luminance * lumInfluence ) ); // ambient occlusion only
     }

     // gl_FragColor = vec4( vec3( readDepth( vUv ) ), 1.0 ); // depth visualization
     gl_FragColor = vec4( final, 1.0 );
 }
 </script>

I'd love to hear what is causing my Ambient Occlusion to not render properly!

If you are using a perspective camera and relying on the depth map for any purpose -- that includes SSAO and shadows -- be careful of your choice of camera.near and camera.far -- especially near. (That would be shadow.camera.near if you are dealing with shadows.)

Push the near plane out as far as is reasonable for your use case. You will achieve the best results if your scene is positioned near the front of the frustum.
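The reason this matters: perspective depth-buffer precision is distributed hyperbolically, so with near = 0.1 and far = 2000 almost all of a 16-bit UnsignedShortType depth texture's precision sits within a few units of the camera, and distant geometry quantizes into bands like the lines in the screenshot. Applied to the snippet in the question, the fix is roughly the following sketch (the value 100 is only an illustration; pick whatever your scene allows):

 // Push the near plane out; the exact value is scene-dependent.
 const camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 100, 2000 );

 // Keep the SSAO uniforms in sync, since readDepth() in the fragment
 // shader linearizes the depth buffer using these values.
 pass.uniforms.cameraNear.value = camera.near;
 pass.uniforms.cameraFar.value = camera.far;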

three.js r.86
