
How to work with semitransparent textures in SceneKit?

I want to render hair with a semitransparent texture, but I always see artifacts like the ones in this video: https://drive.google.com/file/d/1ftl2XRIuuJFurCwndan0K4UMxzn_wvu_/view?usp=sharing It's just one OBJ model with a texture.

Transparency mode - Dual Layer 
Double sided
Blend mode - Alpha 
+ shader with alphatest
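
In SCNMaterial terms, my setup looks roughly like the sketch below (the texture name and the alpha-test cutoff are placeholders, not the exact values from my project):

    import SceneKit

    let hairMaterial = SCNMaterial()
    hairMaterial.diffuse.contents = "hair_diffuse.png"   // placeholder texture name
    hairMaterial.transparencyMode = .dualLayer
    hairMaterial.isDoubleSided = true
    hairMaterial.blendMode = .alpha
    // Alpha test via a fragment shader modifier: discard nearly transparent texels.
    // The 0.1 cutoff is an arbitrary example value.
    hairMaterial.shaderModifiers = [
        .fragment: """
        if (_output.color.a < 0.1) {
            discard_fragment();
        }
        """
    ]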

[Screenshot: sample of the artifacts]

Complete test project: https://drive.google.com/file/d/16AHTXJ_1Rw4yBL6U-mFSUcnq_9mtb8wq/view

If I turn off Write Depth, the haircut looks incorrect, BUT these artifacts are eliminated. How do I do this right?

P.S. If you know how to render this correctly in MetalKit/RealityKit or something else, please answer that too, because I see these issues in RealityKit as well.

This is one of the most common issues in rendering with transparency. Many useful kinds of alpha blending are noncommutative: the order in which you draw things matters.

When drawing opaque surfaces, we use the z-buffer to resolve which fragment is frontmost on a pixel-by-pixel basis. Enable depth write/read, draw your triangles, and let the closest fragment win. This works regardless of drawing order.
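
For reference, that opaque-geometry setup expressed as a Metal depth-stencil state looks roughly like this (a generic sketch, not code from the project in the question):

    import Metal

    // Depth state typically used for opaque geometry: depth test and depth write
    // both enabled, so the fragment closest to the camera wins regardless of draw order.
    let device = MTLCreateSystemDefaultDevice()!
    let opaqueDepthDescriptor = MTLDepthStencilDescriptor()
    opaqueDepthDescriptor.depthCompareFunction = .less   // keep fragments nearer than the stored depth
    opaqueDepthDescriptor.isDepthWriteEnabled = true     // and record their depth
    let opaqueDepthState = device.makeDepthStencilState(descriptor: opaqueDepthDescriptor)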

When the surfaces are translucent, we can't naively expect the z-buffer to automatically produce the correct result; it can only hold one value per pixel at a time. If we enable depth write/read and draw in an arbitrary order, we have a good chance of rejecting fragments that should have contributed to the picture. That's the phenomenon illustrated on the left of your image above.

On the other hand, if we don't read the depth buffer, we have a high likelihood of incorrectly drawing on top of opaque geometry that's already been rendered, making translucent surfaces uncannily "float" in front of objects they should be occluded by.

We resolve these artifacts by first drawing opaque geometry with depth write/read enabled, then drawing translucent surfaces with depth write disabled. Crucially, though, unless you're using a more advanced technique like order-independent transparency (OIT, which is not a silver bullet), you must sort your geometry to get correct compositing. This, again, is because compositing is not generally commutative.
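
In SceneKit, that policy maps onto per-material depth flags, roughly as in this sketch (note that this alone is not the full fix; the sorting discussed below is still needed):

    import SceneKit

    let opaqueMaterial = SCNMaterial()
    opaqueMaterial.readsFromDepthBuffer = true    // opaque: read and write depth
    opaqueMaterial.writesToDepthBuffer = true

    let hairMaterial = SCNMaterial()
    hairMaterial.blendMode = .alpha
    hairMaterial.isDoubleSided = true
    hairMaterial.readsFromDepthBuffer = true      // still occluded by opaque geometry
    hairMaterial.writesToDepthBuffer = false      // but don't reject other translucent fragments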

In 2017, SceneKit introduced "transparency modes" to make rendering translucent objects easier, especially convex objects, whose depth complexity tends to be low. Unfortunately, as mentioned at 50:07 in the video introducing the feature, individual polygons are not sorted when rendering, so transparency modes are not a complete solution.

I suspect the situation is much the same with RealityKit. Sorting polygons every time the camera moves is costly, and is not something you want to do for every translucent object in every scenario, so these engines don't tend to support it.

One way to get perfect rendering in this tricky case is to (1) ensure your geometry is not self-intersecting (if it is, it will be impossible to sort it for correct compositing), (2) put each hair card in its own node (gross, I know), and (3) sort your geometry so it is rendered back-to-front using double-sided materials and the "single layer" transparency mode. This sorting step will likely need to be done on the CPU, and the order in which to render the polygons can then be conveyed to SceneKit by setting the renderingOrder property of the nodes comprising the translucent object.
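
For concreteness, here's a sketch of steps (2) and (3), assuming the hair cards have already been split into child nodes of a single parent; the names below (hairRoot, updateHairSorting, the renderingOrder base of 1000) are placeholders:

    import SceneKit
    import simd

    // Re-sort the hair cards back-to-front relative to the camera and convey the
    // order to SceneKit via renderingOrder (larger values are drawn later).
    func updateHairSorting(hairRoot: SCNNode, pointOfView: SCNNode) {
        let cameraPosition = pointOfView.simdWorldPosition
        let backToFront = hairRoot.childNodes.sorted { a, b in
            simd_distance(a.simdWorldPosition, cameraPosition) >
            simd_distance(b.simdWorldPosition, cameraPosition)
        }
        for (index, card) in backToFront.enumerated() {
            card.renderingOrder = 1000 + index          // arbitrary base; only the relative order matters
            card.geometry?.firstMaterial?.transparencyMode = .singleLayer
            card.geometry?.firstMaterial?.isDoubleSided = true
        }
    }

You would call something like this from SCNSceneRendererDelegate's renderer(_:updateAtTime:) whenever the camera or the hair moves; sorting by node center is an approximation, which is why step (1) matters.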

Alternatively, you can use the SCNNodeRendererDelegate API to intercept the drawing of the geometry and draw it yourself with Metal. This allows rendering with OIT and drawing more efficiently by using one node to represent the whole mesh. You might even be able to move the sort step to the GPU through clever use of the SCNSceneRendererDelegate and SCNGeometrySource APIs, but that's beyond the scope of this answer.
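
A bare skeleton of that delegate route might look like the following; everything Metal-specific (pipeline state, vertex buffers, the OIT resolve) is deliberately left out:

    import SceneKit
    import Metal

    // Taking over drawing of one node: SceneKit calls renderNode(_:renderer:arguments:)
    // and you encode your own draw calls into its current render command encoder.
    final class HairRendererDelegate: NSObject, SCNNodeRendererDelegate {
        func renderNode(_ node: SCNNode, renderer: SCNRenderer, arguments: [String: Any]) {
            guard let encoder = renderer.currentRenderCommandEncoder else { return }
            // ... bind your pipeline and buffers here and draw the hair mesh yourself,
            // e.g. with an order-independent-transparency pass.
            _ = encoder
        }
    }

    // Usage (the node then draws nothing on its own; the delegate is responsible for it;
    // keep a strong reference to the delegate yourself, as with other SceneKit delegates):
    // hairNode.rendererDelegate = HairRendererDelegate()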
