
How to use iOS (Swift) SceneKit SCNSceneRenderer unprojectPoint properly

I'm developing some code using SceneKit on iOS. In my code, I want to determine the x and y coordinates on the global z-plane where z is 0.0 and x and y are determined from a tap gesture. My setup is as follows:

override func viewDidLoad() {
    super.viewDidLoad()

    // create a new scene
    let scene = SCNScene()

    // create and add a camera to the scene
    let cameraNode = SCNNode()
    let camera = SCNCamera()
    cameraNode.camera = camera
    scene.rootNode.addChildNode(cameraNode)
    // place the camera
    cameraNode.position = SCNVector3(x: 0, y: 0, z: 15)

    // create and add an ambient light to the scene
    let ambientLightNode = SCNNode()
    ambientLightNode.light = SCNLight()
    ambientLightNode.light.type = SCNLightTypeAmbient
    ambientLightNode.light.color = UIColor.darkGrayColor()
    scene.rootNode.addChildNode(ambientLightNode)

    let triangleNode = SCNNode()
    triangleNode.geometry = defineTriangle()
    scene.rootNode.addChildNode(triangleNode)

    // retrieve the SCNView
    let scnView = self.view as SCNView

    // set the scene to the view
    scnView.scene = scene

    // configure the view
    scnView.backgroundColor = UIColor.blackColor()
    // add a tap gesture recognizer
    let tapGesture = UITapGestureRecognizer(target: self, action: "handleTap:")
    let gestureRecognizers = NSMutableArray()
    gestureRecognizers.addObject(tapGesture)
    scnView.gestureRecognizers = gestureRecognizers
}

func handleTap(gestureRecognize: UIGestureRecognizer) {
    // retrieve the SCNView
    let scnView = self.view as SCNView
    // check what nodes are tapped
    let p = gestureRecognize.locationInView(scnView)
    // get the camera
    var camera = scnView.pointOfView.camera

    // screenZ is percentage between z near and far
    var screenZ = Float((15.0 - camera.zNear) / (camera.zFar - camera.zNear))
    var scenePoint = scnView.unprojectPoint(SCNVector3Make(Float(p.x), Float(p.y), screenZ))
    println("tapPoint: (\(p.x), \(p.y)) scenePoint: (\(scenePoint.x), \(scenePoint.y), \(scenePoint.z))")
}

func defineTriangle() -> SCNGeometry {

    // Vertices
    var vertices:[SCNVector3] = [
        SCNVector3Make(-2.0, -2.0, 0.0),
        SCNVector3Make(2.0, -2.0, 0.0),
        SCNVector3Make(0.0, 2.0, 0.0)
    ]

    let vertexData = NSData(bytes: vertices, length: vertices.count * sizeof(SCNVector3))
    var vertexSource = SCNGeometrySource(data: vertexData,
        semantic: SCNGeometrySourceSemanticVertex,
        vectorCount: vertices.count,
        floatComponents: true,
        componentsPerVector: 3,
        bytesPerComponent: sizeof(Float),
        dataOffset: 0,
        dataStride: sizeof(SCNVector3))

    // Normals
    var normals:[SCNVector3] = [
        SCNVector3Make(0.0, 0.0, 1.0),
        SCNVector3Make(0.0, 0.0, 1.0),
        SCNVector3Make(0.0, 0.0, 1.0)
    ]

    let normalData = NSData(bytes: normals, length: normals.count * sizeof(SCNVector3))
    var normalSource = SCNGeometrySource(data: normalData,
        semantic: SCNGeometrySourceSemanticNormal,
        vectorCount: normals.count,
        floatComponents: true,
        componentsPerVector: 3,
        bytesPerComponent: sizeof(Float),
        dataOffset: 0,
        dataStride: sizeof(SCNVector3))

    // Indexes
    var indices:[CInt] = [0, 1, 2]
    var indexData  = NSData(bytes: indices, length: sizeof(CInt) * indices.count)
    var indexElement = SCNGeometryElement(
        data: indexData,
        primitiveType: .Triangles,
        primitiveCount: 1,
        bytesPerIndex: sizeof(CInt)
    )

    let geo = SCNGeometry(sources: [vertexSource, normalSource], elements: [indexElement])

    // material
    let material = SCNMaterial()
    material.diffuse.contents = UIColor.redColor()
    material.doubleSided = true
    material.shininess = 1.0
    geo.materials = [material]

    return geo
}

As you can see, I have a triangle 4 units tall by 4 units wide, set on the z-plane (z = 0) centered at x, y (0.0, 0.0). The camera is the default SCNCamera, which looks in the negative z direction, and I placed it at (0, 0, 15). zNear and zFar have their default values of 1.0 and 100.0 respectively. In my handleTap method, I take the x and y screen coordinates of the tap and try to find the x and y global scene coordinates where z = 0.0. I'm using a call to unprojectPoint.

The documentation for unprojectPoint indicates:

Unprojecting a point whose z-coordinate is 0.0 returns a point on the near clipping plane; unprojecting a point whose z-coordinate is 1.0 returns a point on the far clipping plane.

While it doesn't specifically state that points in between have a linear relationship to the near and far planes, I made that assumption and computed screenZ as the percentage distance between the near and far planes at which the z = 0 plane lies. To check my answer, I can tap near the corners of the triangle, since I know where they are in global coordinates.

My problem is that I'm not getting the correct values, and I don't get consistent values when I start changing the zNear and zFar clipping planes on the camera. So my question is, how should I do this? In the end, I will create a new geometry and place it on the z-plane at the position corresponding to where the user clicked.

Thanks in advance for your help.

The typical depth buffer in a 3D graphics pipeline is not linear. Perspective division causes depth in normalized device coordinates to be on a different scale. (See also here.)

So the z-coordinate you're feeding into unprojectPoint isn't actually the one you want.

How, then, do you find the normalized-depth coordinate matching a plane in world space? Well, it helps if that plane is orthogonal to the camera. Then all you need to do is project a point on that plane:
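To see just how nonlinear the mapping is, here is a small Python sketch (not from the original answer) that computes the normalized depth an OpenGL-style perspective projection assigns to a given eye-space distance, using the question's zNear = 1 and zFar = 100:

```python
def ndc_depth(d, n=1.0, f=100.0):
    # Normalized-device-coordinate depth for an OpenGL-style perspective
    # projection, where d is the positive distance from the camera
    # along the view axis: z_ndc = (f+n)/(f-n) - 2*f*n / ((f-n)*d)
    return (f + n) / (f - n) - 2.0 * f * n / ((f - n) * d)

def window_depth(d, n=1.0, f=100.0):
    # Remap NDC depth [-1, 1] to the [0, 1] range unprojectPoint expects
    return 0.5 * ndc_depth(d, n, f) + 0.5

# The z = 0 plane is 15 units from a camera placed at z = 15
actual = window_depth(15.0)                    # roughly 0.94
linear_guess = (15.0 - 1.0) / (100.0 - 1.0)    # roughly 0.14, the question's screenZ
print(actual, linear_guess)
```

The linear interpolation the question uses lands nowhere near the correct normalized depth, which is why the tapped corners don't line up.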

let projectedOrigin = scnView.projectPoint(SCNVector3Zero)

Now you have the position of the world origin in 3D view + normalized-depth space. To map other points in 2D view space onto this plane, use the z-coordinate from this vector:

let vp = gestureRecognizer.locationInView(scnView)
let vpWithZ = SCNVector3(x: Float(vp.x), y: Float(vp.y), z: projectedOrigin.z)
let worldPoint = scnView.unprojectPoint(vpWithZ)

This gets you a point in world space that maps the click/tap location onto the z = 0 plane, suitable for use as the position of a node if you want to show that location to the user.

(Note that this approach works only when mapping onto a plane perpendicular to the camera's view direction. If you want to map view coordinates onto a differently oriented surface, the normalized depth value in vpWithZ won't be constant.)

After some experimentation, what we developed projects the touch point to an arbitrary depth in the scene from a given point.

The modification you need is to compute the intersection of the Z = 0 plane with this line; that will be your point.

private func touchPointToScenePoint(recognizer: UIGestureRecognizer) -> SCNVector3 {
    // Get touch point
    let touchPoint = recognizer.locationInView(sceneView)

    // Compute near & far points
    let nearVector = SCNVector3(x: Float(touchPoint.x), y: Float(touchPoint.y), z: 0)
    let nearScenePoint = sceneView.unprojectPoint(nearVector)
    let farVector = SCNVector3(x: Float(touchPoint.x), y: Float(touchPoint.y), z: 1)
    let farScenePoint = sceneView.unprojectPoint(farVector)

    // Compute view vector
    let viewVector = SCNVector3(x: Float(farScenePoint.x - nearScenePoint.x), y: Float(farScenePoint.y - nearScenePoint.y), z: Float(farScenePoint.z - nearScenePoint.z))

    // Normalize view vector
    let vectorLength = sqrt(viewVector.x*viewVector.x + viewVector.y*viewVector.y + viewVector.z*viewVector.z)
    let normalizedViewVector = SCNVector3(x: viewVector.x/vectorLength, y: viewVector.y/vectorLength, z: viewVector.z/vectorLength)

    // Scale normalized vector to find scene point
    let scale = Float(15)
    let scenePoint = SCNVector3(x: normalizedViewVector.x*scale, y: normalizedViewVector.y*scale, z: normalizedViewVector.z*scale)

    print("2D point: \(touchPoint). 3D point: \(nearScenePoint). Far point: \(farScenePoint). scene point: \(scenePoint)")

    // Return <scenePoint>
    return scenePoint
}
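The Z = 0 intersection mentioned above can be computed parametrically on the near-to-far ray. A minimal sketch in plain Python (not part of the original answer), with made-up points standing in for the unprojectPoint results at depths 0.0 and 1.0:

```python
def intersect_z0(near, far):
    """Intersect the line from `near` to `far` with the z = 0 plane.

    `near` and `far` are (x, y, z) tuples, e.g. the world-space results
    of unprojecting the touch point at normalized depths 0.0 and 1.0.
    Assumes the line is not parallel to the plane (near.z != far.z).
    """
    nx, ny, nz = near
    fx, fy, fz = far
    # Parameter t where near + t * (far - near) has z == 0
    t = -nz / (fz - nz)
    return (nx + t * (fx - nx), ny + t * (fy - ny), 0.0)

# With the question's setup (camera at z = 15, zNear = 1, zFar = 100),
# a centered tap unprojects to roughly (0, 0, 14) and (0, 0, -85):
print(intersect_z0((0.0, 0.0, 14.0), (0.0, 0.0, -85.0)))
```

Replacing the fixed `scale = 15` step with this intersection gives the z = 0 point the question is after.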

This is my solution for getting an exact point in 3D space.

// If the object is in 0,0,0 point you can use
float zDepth = [self projectPoint:SCNVector3Zero].z;

// or myNode.position.
//zDepth = [self projectPoint:myNode.position].z;

NSLog(@"2D point: X %f, Y: %f, zDepth: %f", click.x, click.y, zDepth);

SCNVector3 worldPoint = [self unprojectPoint:SCNVector3Make(click.x, click.y,  zDepth)];

SCNVector3 nearVec = SCNVector3Make(click.x, click.y, 0.0);
SCNVector3 nearPoint = [self unprojectPoint:nearVec];

SCNVector3 farVec = SCNVector3Make(click.x, click.y, 1.0);
SCNVector3 farPoint = [self unprojectPoint:farVec];

float z_magnitude = fabs(farPoint.z - nearPoint.z);
float near_pt_factor = fabs(nearPoint.z) / z_magnitude;
float far_pt_factor = fabs(farPoint.z) / z_magnitude;

GLKVector3 nearP = GLKVector3Make(nearPoint.x, nearPoint.y, nearPoint.z);
GLKVector3 farP = GLKVector3Make(farPoint.x, farPoint.y, farPoint.z);

GLKVector3 final_pt = GLKVector3Add(GLKVector3MultiplyScalar(nearP, far_pt_factor), GLKVector3MultiplyScalar(farP, near_pt_factor));

NSLog(@"3D world point = %f, %f, %f", final_pt.x, final_pt.y, worldPoint.z);
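The near/far-factor blend in that snippet is the same plane intersection expressed as a weighted average. A quick Python check (my addition, assuming as in the question's setup that nearPoint.z and farPoint.z straddle z = 0 with opposite signs):

```python
def blend_to_z0(near, far):
    # Weight each endpoint by the *other* endpoint's |z| share of the
    # total z span, so the blended z coordinate cancels to 0 when the
    # two z values have opposite signs.
    z_mag = abs(far[2] - near[2])
    near_factor = abs(near[2]) / z_mag
    far_factor = abs(far[2]) / z_mag
    return tuple(n * far_factor + f * near_factor for n, f in zip(near, far))

# Camera at z = 15 looking down -z: the near plane unprojects to z = 14
# and the far plane to z = -85 (zNear = 1, zFar = 100)
print(blend_to_z0((0.0, 0.0, 14.0), (0.0, 0.0, -85.0)))
```

Note the weights sum to 1 only because the signs are opposite; if the z = 0 plane is not between the near and far points, this blend no longer lands on it.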
