
SceneKit Metal Depth Buffer

I'm attempting to write an augmented reality app using SceneKit, and I need accurate 3D points from the current rendered frame, given a 2D pixel and a depth, using SCNSceneRenderer's unprojectPoint method. This requires an x, y, and z, where x and y are pixel coordinates and z is normally a value read from the depth buffer for that frame.
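For context, the call I ultimately want to make looks something like this (a sketch; the pixel coordinates and depth value are placeholders):

// Sketch of the intended unprojection: x and y are pixel coordinates and z is
// a normalized depth sample for that pixel (placeholder values throughout).
let pixelX: Float = 100, pixelY: Float = 200
let depth: Float = 0.85
let worldPoint = scnView!.unprojectPoint(SCNVector3(x: pixelX, y: pixelY, z: depth))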

The SCNView's delegate has this method to render the depth frame:

func renderer(_ renderer: SCNSceneRenderer, willRenderScene scene: SCNScene, atTime time: TimeInterval) {
    // Kick off the depth-only pass just before SceneKit renders the main frame.
    renderDepthFrame()
}

func renderDepthFrame() {

    // Set up a viewport matching the view's resolution.
    let viewport = CGRect(x: 0, y: 0, width: Double(SettingsModel.model.width), height: Double(SettingsModel.model.height))

    // Build a render pass descriptor whose only attachment is a depth texture.
    let renderPassDescriptor = MTLRenderPassDescriptor()

    let depthDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .depth32Float, width: Int(SettingsModel.model.width), height: Int(SettingsModel.model.height), mipmapped: false)
    let depthTex = scnView!.device!.makeTexture(descriptor: depthDescriptor)!
    depthTex.label = "Depth Texture"
    renderPassDescriptor.depthAttachment.texture = depthTex
    renderPassDescriptor.depthAttachment.loadAction = .clear
    renderPassDescriptor.depthAttachment.clearDepth = 1.0
    renderPassDescriptor.depthAttachment.storeAction = .store

    let commandBuffer = commandQueue.makeCommandBuffer()!

    // Render the same scene from the same point of view as the main pass.
    scnRenderer!.scene = scnView!.scene
    scnRenderer!.pointOfView = scnView!.pointOfView!

    scnRenderer!.render(atTime: 0, viewport: viewport, commandBuffer: commandBuffer, passDescriptor: renderPassDescriptor)

    // Blit the depth texture into a shared buffer so the CPU can access it.
    let depthImageBuffer = scnView!.device!.makeBuffer(length: depthTex.width * depthTex.height * 4, options: .storageModeShared)!
    depthImageBuffer.label = "Depth Buffer"
    let blitCommandEncoder = commandBuffer.makeBlitCommandEncoder()!
    blitCommandEncoder.copy(from: depthTex, sourceSlice: 0, sourceLevel: 0, sourceOrigin: MTLOriginMake(0, 0, 0), sourceSize: MTLSizeMake(depthTex.width, depthTex.height, 1), to: depthImageBuffer, destinationOffset: 0, destinationBytesPerRow: 4 * depthTex.width, destinationBytesPerImage: 4 * depthTex.width * depthTex.height)
    blitCommandEncoder.endEncoding()

    // Once the GPU finishes, copy the depth values out as a [Float].
    commandBuffer.addCompletedHandler { _ in
        let typedPointer = depthImageBuffer.contents().assumingMemoryBound(to: Float.self)
        self.currentMap = Array(UnsafeBufferPointer(start: typedPointer, count: depthTex.width * depthTex.height))
    }

    commandBuffer.commit()
}

This works. I get depth values between 0 and 1. The problem is that I can't use them with unprojectPoint, because they don't appear to be scaled the same way as the main pass, despite using the same SCNScene and SCNCamera.

My questions:

  1. Is there any way to get the depth values directly from SceneKit SCNView's main pass without having to do an extra pass with a separate SCNRenderer?

  2. Why don't the depth values in my pass match the values I get from doing a hit test and then unprojecting (sketched below)? The depth values from my pass go from 0.78 to 0.94, while the depth values from the hit test range from 0.89 to 0.97, which, curiously enough, matches the OpenGL depth values of the scene when I rendered it in Python.
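For reference, the hit-test comparison in question 2 goes roughly like this (a sketch; the pixel is a placeholder):

// Sketch of the hit-test comparison: take the first hit under a pixel,
// project its world position back, and read off z, the depth at that pixel.
let point = CGPoint(x: 100, y: 100)
if let hit = scnView!.hitTest(point, options: nil).first {
    let projected = scnView!.projectPoint(hit.worldCoordinates)
    print("hit test depth:", projected.z)   // lands in 0.89...0.97 for my scene
}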

My hunch is that this comes down to a difference in viewport transforms: SceneKit appears to be doing something to scale the depth values from -1 to 1, just like OpenGL.
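If that hunch is right, the fix would be a simple affine remap, and the numbers above are consistent with it: 0.78 * 0.5 + 0.5 = 0.89 and 0.94 * 0.5 + 0.5 = 0.97. A minimal sketch, assuming currentMap is row-major at the view's resolution:

// Sketch: remap a depth sample from my Metal pass (NDC z in [0, 1]) to the
// OpenGL-style value that unprojectPoint appears to expect. The d * 0.5 + 0.5
// remap matches the ranges above (0.78 -> 0.89, 0.94 -> 0.97).
func worldPoint(atPixelX x: Int, y: Int) -> SCNVector3 {
    let metalDepth = currentMap[y * Int(SettingsModel.model.width) + x]
    let glStyleDepth = metalDepth * 0.5 + 0.5
    return scnView!.unprojectPoint(SCNVector3(x: Float(x), y: Float(y), z: glStyleDepth))
}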

EDIT: And in case you're wondering, I can't use the hitTest method directly. It's too slow for what I'm trying to achieve.

As a workaround, I switched to OpenGL ES and read the depth buffer by adding a fragment shader modifier (via SCNShadable) that packs the depth value into the RGBA renderbuffer.

See here for more info: http://concord-consortium.github.io/lab/experiments/webgl-gpgpu/webgl.html
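The packing itself is the standard trick from that article. As an SCNShadable fragment modifier it comes out roughly like this (a sketch, assuming the GL ES path where gl_FragCoord is available, an RGBA8 target, and where material stands for whatever material the geometry uses):

// Sketch of the depth-packing workaround (GLSL, OpenGL ES path): encode
// gl_FragCoord.z into the four 8-bit color channels. The CPU side unpacks
// with dot(rgba, vec4(1.0, 1.0/255.0, 1.0/65025.0, 1.0/16581375.0)).
let packDepth = """
#pragma body
vec4 enc = fract(gl_FragCoord.z * vec4(1.0, 255.0, 65025.0, 16581375.0));
enc -= enc.yzww * vec4(1.0/255.0, 1.0/255.0, 1.0/255.0, 0.0);
_output.color = enc;
"""
material.shaderModifiers = [.fragment: packDepth]   // material: hypothetical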

I understand this is a valid approach, since it's used for shadow mapping quite often on OpenGL ES devices and WebGL, but it feels hacky to me and I shouldn't have to do it. I would still be interested in another answer if someone can figure out Metal's viewport transformation.
