
Three.js post-processing: How to keep depth texture for multiple passes?

I am rendering a scene with three.js that needs multiple post-processing passes, and most of those passes need the depth buffer. My plan was to first render all visible objects to obtain color and depth, and then run all post-processing passes using two framebuffers that alternate as read and write targets. The passes below are just examples:

  1. Render objects -> FB0
  2. DistortionPass, taking FB0 as input -> FB1
  3. GodrayPass, taking FB1 as input -> FB0
  4. SSAOPass, taking FB0 as input -> screen
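
The alternating read/write scheme above can be sketched independently of WebGL. A minimal, hypothetical JavaScript sketch (names made up) of how the two framebuffer indices swap roles between passes:

```javascript
// Hypothetical sketch: plan which of two framebuffers each
// post-processing pass reads from and writes to (ping-pong).
function planPasses(passNames) {
  let read = 0;  // FB index the next pass samples from
  let write = 1; // FB index the next pass renders into
  return passNames.map((name) => {
    const step = { name, read: `FB${read}`, write: `FB${write}` };
    [read, write] = [write, read]; // swap roles after every pass
    return step;
  });
}

// The scene render writes FB0 first, so the first pass reads FB0.
const plan = planPasses(["distortion", "godray"]);
console.log(plan);
// (the last pass of a real chain would write to the screen instead)
```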

The GodrayPass needs to read the depth from the first render pass, so I need to bind the depth texture. It must not be the one attached to FB0, otherwise it would create a feedback loop, since that shader is writing to FB0.

I think it would make sense to copy the depth into a separate texture after rendering the objects, so I can bind that texture in any pass without worrying about a feedback loop.

However,

  • copyTexImage2D does not seem to support copying from the depth buffer.
  • Using a shader to pack the depth buffer into an RGBA8 texture after the first pass would require every subsequent pass to unpack the float again.
  • Rendering all objects again with a shader that writes depth to the color buffer would also require packing, or suffer precision loss.
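
To illustrate the second bullet, the pack/unpack round trip can be sketched in plain JavaScript (in practice this would live in GLSL; the function names here are made up):

```javascript
// Hypothetical sketch of the classic depth-packing trick: spread a
// float in [0, 1) across four 8-bit channels, as an RGBA8 target would.
function packDepth(depth) {
  let v = Math.min(Math.max(depth, 0), 1 - 1e-7);
  const bytes = [];
  for (let i = 0; i < 4; i++) {
    v *= 256;
    const b = Math.floor(v); // one 8-bit channel
    bytes.push(b);
    v -= b;                  // carry the remaining fraction
  }
  return bytes; // [r, g, b, a], each 0..255
}

// Every pass that samples the packed texture has to undo the packing.
function unpackDepth([r, g, b, a]) {
  return r / 256 + g / 256 ** 2 + b / 256 ** 3 + a / 256 ** 4;
}
```

The round trip is accurate to roughly 1/256⁴, which is why packing is viable at all, but it costs an unpack in every consumer, which is exactly the overhead the bullet points out.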

What is the best practice here? Am I on the right path?

I can use WebGL 2.0 (OpenGL ES 3.0), but I would like to avoid unpopular extensions.

Three.js doesn't call them FBs; it calls them RenderTargets. For WebGL they are WebGLRenderTargets.

https://threejs.org/docs/#api/en/renderers/WebGLRenderTarget

This example shows how to set up a depth texture with a render target:

// color attachment
target = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );
target.texture.format = THREE.RGBFormat;
target.texture.minFilter = THREE.NearestFilter;
target.texture.magFilter = THREE.NearestFilter;
target.texture.generateMipmaps = false;
target.stencilBuffer = false;
// attach a depth texture so later passes can sample the depth
target.depthBuffer = true;
target.depthTexture = new THREE.DepthTexture();
target.depthTexture.format = THREE.DepthFormat;
target.depthTexture.type = THREE.UnsignedShortType;

And this article shows rendering to render targets. And this article shows rendering to render targets for the purposes of post-processing using the three.js EffectComposer and Pass objects.

So, just make three render targets:

RT0 = color + depth
RT1 = color
RT2 = color

Then set up the Pass objects so that:

Render Objects -> RT0(color+depth)
DistortionPass, taking RT0(color) as input -> RT1(color)
GodrayPass, taking RT1(color) + RT0(depth) as input -> RT2(color)
SSAOPass, taking RT2(color) as input -> screen
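
One way to convince yourself this wiring is feedback-free: no pass samples an attachment of the render target it is currently writing. A small, hypothetical JavaScript check of the table above (the data layout is made up for illustration):

```javascript
// The pass wiring above, written out as data.
const passes = [
  { name: "render",     reads: [],                         writes: "RT0" },
  { name: "distortion", reads: ["RT0.color"],              writes: "RT1" },
  { name: "godray",     reads: ["RT1.color", "RT0.depth"], writes: "RT2" },
  { name: "ssao",       reads: ["RT2.color"],              writes: "screen" },
];

// A pass would form a feedback loop if it read any attachment of the
// render target it writes into.
const feedbackFree = passes.every(
  (p) => p.reads.every((r) => !r.startsWith(p.writes + "."))
);
console.log(feedbackFree); // true for this wiring
```

Note how the godray pass can safely read RT0's depth precisely because it writes into RT2, which is why three targets are enough.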
