How to Extract SceneKit Depth Buffer at runtime in AR scene?

How does one extract the SceneKit depth buffer? I'm building an AR app that runs on Metal, and I'm really struggling to find any information on how to extract a 2D depth buffer so I can render out fancy 3D photos of my scenes. Any help is greatly appreciated.

Your question is unclear, but I'll try to answer.

Depth pass from VR view

If you need to render a Depth pass from SceneKit's 3D environment, then you should use, for instance, an SCNGeometrySource.Semantic structure. There are vertex, normal, texcoord, color, and tangent type properties. Let's see what a vertex type property is:

static let vertex: SCNGeometrySource.Semantic

This semantic identifies data containing the positions of each vertex in the geometry. For a custom shader program, you use this semantic to bind SceneKit's vertex position data to an input attribute of the shader. Vertex position data is typically an array of three- or four-component vectors.

Here's a code excerpt from the iOS Depth Sample project.

UPDATED: Using this code you can get a position for every point in an SCNScene and assign a color to each of those points (this is what a zDepth channel really is):

import SceneKit

struct PointCloudVertex {
    var x: Float, y: Float, z: Float
    var r: Float, g: Float, b: Float
}

@objc class PointCloud: NSObject {

    var pointCloud : [SCNVector3] = []
    var colors: [UInt8] = []

    public func pointCloudNode() -> SCNNode {
        let points = self.pointCloud
        var vertices = Array(repeating: PointCloudVertex(x: 0,
                                                         y: 0,
                                                         z: 0,
                                                         r: 0,
                                                         g: 0,
                                                         b: 0), 
                                                     count: points.count)

        // Interleave positions and colors; points.indices avoids crashing on an empty cloud.
        for i in points.indices {
            let p = points[i]
            vertices[i].x = Float(p.x)
            vertices[i].y = Float(p.y)
            vertices[i].z = Float(p.z)
            vertices[i].r = Float(colors[i * 4]) / 255.0
            vertices[i].g = Float(colors[i * 4 + 1]) / 255.0
            vertices[i].b = Float(colors[i * 4 + 2]) / 255.0
        }

        let node = buildNode(points: vertices)
        return node
    }

    private func buildNode(points: [PointCloudVertex]) -> SCNNode {
        // Pack the interleaved vertices into one data blob.
        // .stride is used instead of .size in case the struct is ever padded.
        let vertexData = NSData(
            bytes: points,
            length: MemoryLayout<PointCloudVertex>.stride * points.count
        )
        // Position source: the first three Floats (x, y, z) of every vertex.
        let positionSource = SCNGeometrySource(
            data: vertexData as Data,
            semantic: SCNGeometrySource.Semantic.vertex,
            vectorCount: points.count,
            usesFloatComponents: true,
            componentsPerVector: 3,
            bytesPerComponent: MemoryLayout<Float>.size,
            dataOffset: 0,
            dataStride: MemoryLayout<PointCloudVertex>.stride
        )
        // Color source: the next three Floats (r, g, b), offset by 12 bytes.
        let colorSource = SCNGeometrySource(
            data: vertexData as Data,
            semantic: SCNGeometrySource.Semantic.color,
            vectorCount: points.count,
            usesFloatComponents: true,
            componentsPerVector: 3,
            bytesPerComponent: MemoryLayout<Float>.size,
            dataOffset: MemoryLayout<Float>.size * 3,
            dataStride: MemoryLayout<PointCloudVertex>.stride
        )
        // Point-primitive element: no index data is needed, one primitive per vertex.
        let element = SCNGeometryElement(
            data: nil,
            primitiveType: .point,
            primitiveCount: points.count,
            bytesPerIndex: MemoryLayout<Int>.size
        )

        element.pointSize = 1
        element.minimumPointScreenSpaceRadius = 1
        element.maximumPointScreenSpaceRadius = 5

        let pointsGeometry = SCNGeometry(sources: [positionSource, colorSource], elements: [element])

        return SCNNode(geometry: pointsGeometry)
    }
}
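
For illustration, a minimal usage sketch might look like the following. The sample points, the RGBA color bytes, and the `scene` variable are assumptions and are not part of the original excerpt:

// Hypothetical usage of the PointCloud class above: supply points and RGBA bytes,
// then attach the resulting node to an existing SCNScene (assumed to exist as `scene`).
let cloud = PointCloud()
cloud.pointCloud = [SCNVector3(0, 0, 0),
                    SCNVector3(0.1, 0, -0.2),
                    SCNVector3(-0.1, 0.05, -0.3)]
// Four bytes (RGBA) per point, matching the colors[i * 4] indexing in pointCloudNode().
cloud.colors = [255, 0, 0, 255,
                0, 255, 0, 255,
                0, 0, 255, 255]

let cloudNode = cloud.pointCloudNode()
scene.rootNode.addChildNode(cloudNode)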

Depth pass from AR view

If you need to render a Depth pass from an ARSCNView, it is possible only if you're using ARFaceTrackingConfiguration with the front-facing camera. In that case you can use the capturedDepthData instance property, which gives you a depth map captured along with the video frame.

var capturedDepthData: AVDepthData? { get }

But this depth map is captured at only 15 fps and at a lower resolution than the corresponding RGB image, which is captured at 60 fps.

Face-based AR uses the front-facing, depth-sensing camera on compatible devices. When running such a configuration, frames vended by the session contain a depth map captured by the depth camera in addition to the color pixel buffer (see capturedImage) captured by the color camera. This property's value is always nil when running other AR configurations.
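
Before you can read that property, the session has to be running a face-tracking configuration. Here is a minimal sketch, assuming `sceneView` is your ARSCNView set up elsewhere:

import ARKit

// Minimal sketch: run a face-tracking session so that frames carry capturedDepthData.
func startFaceTracking(on sceneView: ARSCNView) {
    guard ARFaceTrackingConfiguration.isSupported else {
        print("Face tracking (and thus capturedDepthData) is not available on this device.")
        return
    }
    let configuration = ARFaceTrackingConfiguration()
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}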

And real code for reading the depth map each frame could look like this:

extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {

        DispatchQueue.global().async {

            guard let frame = self.sceneView.session.currentFrame else {
                return
            }
            // capturedDepthData is AVDepthData, not a CVImageBuffer; its depthDataMap
            // property is the CVPixelBuffer that actually holds the depth image.
            if let depthData = frame.capturedDepthData {
                self.depthImage = depthData.depthDataMap
            }
        }
    }
}
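
If you then want to visualize or filter that depth map, one possible follow-up step (not part of the original answer) is to wrap it in a CIImage. The helper function below and its name are assumptions:

import AVFoundation
import CoreImage

// Wrap an AVDepthData's pixel buffer in a CIImage for display or further processing.
// The Float32 values are raw depth in meters, so additional normalization may be
// needed before showing them as a grayscale image.
func depthCIImage(from depthData: AVDepthData) -> CIImage {
    // Convert to 32-bit float depth if the data arrives in a 16-bit format.
    let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    return CIImage(cvPixelBuffer: converted.depthDataMap)
}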

Depth pass from Video view

Also, you can extract a true Depth pass using two rear-facing cameras and the AVFoundation framework.

Look at the Image Depth Map tutorial, where the concept of Disparity is introduced; a rough capture-session sketch follows below.
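
As a rough illustration of that AVFoundation route, a capture-session sketch might look like the following. The class name, the delegate wiring, and the omitted error handling are assumptions, and the selected device's active format must actually support depth delivery:

import AVFoundation

// Rough sketch: stream depth data from the rear dual camera with AVFoundation.
// Threading and error handling are omitted for brevity.
final class DepthCaptureSetup {
    let session = AVCaptureSession()
    let depthOutput = AVCaptureDepthDataOutput()

    func configure(delegate: AVCaptureDepthDataOutputDelegate, queue: DispatchQueue) throws {
        session.beginConfiguration()
        session.sessionPreset = .photo

        // The dual (wide + telephoto) rear camera is one of the devices that can produce depth.
        guard let device = AVCaptureDevice.default(.builtInDualCamera,
                                                   for: .video,
                                                   position: .back) else {
            session.commitConfiguration()
            return
        }
        let input = try AVCaptureDeviceInput(device: device)
        if session.canAddInput(input) { session.addInput(input) }

        // Stream depth alongside video; the delegate receives AVDepthData per frame.
        if session.canAddOutput(depthOutput) {
            session.addOutput(depthOutput)
            depthOutput.isFilteringEnabled = true
            depthOutput.setDelegate(delegate, callbackQueue: queue)
        }

        session.commitConfiguration()
        session.startRunning()
    }
}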
