
ARKit: Move Object with PanGesture (the right way)

I've been reading plenty of StackOverflow answers on how to move an object by dragging it across the screen. Some use a hit test against .featurePoints, some use the gesture's translation or just keep track of the object's lastPosition. But honestly, none of them work the way everyone expects them to.

Hit testing against .featurePoints just makes the object jump around, because you don't always hit a feature point while dragging your finger. I don't understand why everyone keeps suggesting it.

Solutions like this one work: Dragging SCNNode in ARKit Using SceneKit

But the object doesn't really follow your finger, and the moment you take a few steps or change the angle of the object or the camera and then try to move the object, the x and z axes are all inverted... which, given the approach, makes total sense.

I really want to move objects the way the Apple demo does, but I've looked at Apple's code... it's extremely weird and over-complicated, and I can barely understand it. The technique they use to move objects so beautifully isn't even close to anything anyone suggests online. https://developer.apple.com/documentation/arkit/handling_3d_interaction_and_ui_controls_in_augmented_reality

There has to be an easier way to do it.

A bit late, but I know I had some problems solving this, too. Eventually I figured out a way to do it by performing two separate hit tests whenever my gesture recognizer is called.

First, I perform a hit test against my 3D object to detect whether I'm currently pressing an object (since you'd otherwise get results for pressing featurePoints, planes, etc., if you don't specify any options). I do this by using the .categoryBitMask value of SCNHitTestOption. Keep in mind that you have to assign the correct categoryBitMask value to the object node and all of its child nodes beforehand for the hit test to work. I declare an enum I can use for that:

enum BodyType : Int {
    case ObjectModel = 2;
}

As is apparent from the answer to the question about .categoryBitMask values I posted here, it's important to consider which values you assign your bit masks.
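Since the filter matching is done bitwise, each category should occupy its own bit (a power of two). A minimal sketch of how that matching works, runnable without SceneKit (the `matches` function and the `plane` case are illustrative, not part of any framework):

```swift
enum BodyType: Int {
    case objectModel = 2 // 0b010
    case plane = 4       // 0b100
}

// A node passes a category filter when the bitwise AND of its
// categoryBitMask and the search mask is non-zero.
func matches(nodeMask: Int, searchMask: Int) -> Bool {
    return nodeMask & searchMask != 0
}
```

This is why values like 3 (0b011) make poor categories: they overlap two bits and match two different filters at once.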

Below is the code I use together with a UILongPressGestureRecognizer in order to select the object I'm currently pressing:

guard let recognizerView = recognizer.view as? ARSCNView else { return }

let touch = recognizer.location(in: recognizerView)

let hitTestResult = self.sceneView.hitTest(touch, options: [SCNHitTestOption.categoryBitMask: BodyType.ObjectModel.rawValue])
guard let modelNodeHit = hitTestResult.first?.node else { return }

After that, I perform a second hit test to find the plane I'm pressing on. You can use the type .existingPlaneUsingExtent if you don't want to move the object further than the edges of a plane, or .existingPlane if you want to move the object indefinitely along the detected plane surface.

 var planeHit: ARHitTestResult? // keep this as a property so the .ended branch can reuse the last hit

 if recognizer.state == .changed {

     let hitTestPlane = self.sceneView.hitTest(touch, types: .existingPlane)
     guard let firstPlaneHit = hitTestPlane.first else { return }
     planeHit = firstPlaneHit
     modelNodeHit.position = SCNVector3(firstPlaneHit.worldTransform.columns.3.x,
                                        modelNodeHit.position.y,
                                        firstPlaneHit.worldTransform.columns.3.z)

 } else if recognizer.state == .ended || recognizer.state == .cancelled || recognizer.state == .failed {

     // planeHit is still nil if the gesture never entered .changed
     guard let lastPlaneHit = planeHit else { return }
     modelNodeHit.position = SCNVector3(lastPlaneHit.worldTransform.columns.3.x,
                                        modelNodeHit.position.y,
                                        lastPlaneHit.worldTransform.columns.3.z)
 }

I made a GitHub repo while I was experimenting with ARAnchors. You can check it out if you want to see my approach in practice, but I didn't make it with the intention of anyone else using it, so it's unfinished. Also, the develop branch should support some functionality for objects with more childNodes.

EDIT: ==================================

To clarify: if you want to use a .scn object instead of a regular geometry, you need to iterate through all the child nodes of the object when creating it, setting each child's bit mask like this:

 let objectModelScene = SCNScene(named:
        "art.scnassets/object/object.scn")!
 // childNode(withName:recursively:) returns an optional, so unwrap it
 guard let objectNode = objectModelScene.rootNode.childNode(
        withName: "theNameOfTheParentNodeOfTheObject", recursively: true) else { return }
 objectNode.categoryBitMask = BodyType.ObjectModel.rawValue
 objectNode.enumerateChildNodes { (node, _) in
        node.categoryBitMask = BodyType.ObjectModel.rawValue
    }

Then, in the gesture recognizer, after you've obtained a hitTestResult

let hitTestResult = self.sceneView.hitTest(touch, options: [SCNHitTestOption.categoryBitMask: BodyType.ObjectModel.rawValue])

you need to find the parent node, since otherwise you might be moving only the individual child node you just pressed. Do this by searching recursively upwards through the node tree of the node you just found:

guard let objectNode = getParentNodeOf(hitTestResult.first?.node) else { return }

where you declare the getParentNodeOf method as follows:

func getParentNodeOf(_ nodeFound: SCNNode?) -> SCNNode? { 
    if let node = nodeFound {
        if node.name == "theNameOfTheParentNodeOfTheObject" {
            return node
        } else if let parent = node.parent {
            return getParentNodeOf(parent)
        }
    }
    return nil
}

Then you're free to perform any operation on objectNode, since it will be the parent node of your .scn object, meaning any transformation applied to it will also be applied to its child nodes.
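The walk-up-the-parent-chain idea in getParentNodeOf can be sketched with a plain class so it runs outside SceneKit (the `Node` type here is a stand-in for SCNNode, not part of any framework):

```swift
// A plain stand-in for SCNNode's name/parent chain.
final class Node {
    let name: String
    weak var parent: Node?
    init(name: String, parent: Node? = nil) {
        self.name = name
        self.parent = parent
    }
}

// Walk up the tree until a node with the parent's known name is found.
func getParentNode(of nodeFound: Node?, named target: String) -> Node? {
    guard let node = nodeFound else { return nil }
    if node.name == target { return node }
    return getParentNode(of: node.parent, named: target)
}
```

Whatever child the hit test returns, the walk terminates either at the named ancestor or at the root (returning nil), so transformations are always applied to the whole model.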

I incorporated some of my own ideas into Claesson's answer. I noticed some lag when dragging the node: the node couldn't keep up with the finger's movement.

To make the node move more smoothly, I added a variable that keeps track of the node currently being moved, and set its position to the location of the touch.

    var selectedNode: SCNNode?

Also, I set a .categoryBitMask value to specify the category of nodes I want to edit (move). The default bit mask value is 1.

The reason we set the category bit mask is to distinguish between different kinds of nodes and to specify which ones you want to select (move around, etc.).

    enum CategoryBitMask: Int {
        case categoryToSelect = 2        // 010
        case otherCategoryToSelect = 4   // 100
        // you can add more bit masks below . . .
    }

Then, I added a UILongPressGestureRecognizer in viewDidLoad():

        let longPressRecognizer = UILongPressGestureRecognizer(target: self, action: #selector(longPressed))
        self.sceneView.addGestureRecognizer(longPressRecognizer)

Below is the UILongPressGestureRecognizer handler I use to detect the long press, which initiates the dragging of the node.

First, get the touch location from the recognizerView:

    @objc func longPressed(recognizer: UILongPressGestureRecognizer) {

       guard let recognizerView = recognizer.view as? ARSCNView else { return }
       let touch = recognizer.location(in: recognizerView)

The following code runs once when a long press is detected.

Here, we perform a hitTest to select the node that has been touched. Note that we specify the .categoryBitMask option to select only nodes of the category CategoryBitMask.categoryToSelect:

       // Runs once when long press is detected.
       if recognizer.state == .began {
            // perform a hitTest
            let hitTestResult = self.sceneView.hitTest(touch, options: [SCNHitTestOption.categoryBitMask: CategoryBitMask.categoryToSelect.rawValue])

            guard let hitNode = hitTestResult.first?.node else { return }

            // Set hitNode as selected
            self.selectedNode = hitNode

The following code runs periodically until the user releases the finger. Here, we perform another hitTest to obtain the plane you want the node to move along.

        // Runs periodically after .began
        } else if recognizer.state == .changed {
            // make sure a node has been selected from .began
            guard let hitNode = self.selectedNode else { return }

            // perform a hitTest to obtain the plane 
            let hitTestPlane = self.sceneView.hitTest(touch, types: .existingPlane)
            guard let hitPlane = hitTestPlane.first else { return }
            hitNode.position = SCNVector3(hitPlane.worldTransform.columns.3.x,
                                           hitNode.position.y,
                                           hitPlane.worldTransform.columns.3.z)

Make sure the node is deselected when the finger is removed from the screen.

        // Runs when finger is removed from screen. Only once.
        } else if recognizer.state == .ended || recognizer.state == .cancelled || recognizer.state == .failed{

            guard self.selectedNode != nil else { return }

            // Undo selection
            self.selectedNode = nil
        }
    }

Short answer: to get this nice, fluent dragging effect like in the Apple demo project, you'll have to do it like in the Apple demo project (Handling 3D Interaction). On the other hand, I agree with you that the code may be confusing if you look at it for the first time. Calculating the correct movement for an object placed on a floor plane - correct from every position and every viewing angle - is not easy at all. It's a complex code construct that produces this superb dragging effect. Apple did a great job achieving it, but it doesn't make it easy for us.

Full answer: stripping the AR Interaction template down to what you need is a nightmare - but it should work as well if you invest enough time. If you'd rather start from scratch, basically begin with a plain Swift ARKit/SceneKit Xcode template (the one containing the spaceship).

You'll also need the entire AR Interaction Template project from Apple. (The link is included in the SO question.) At the end, you should be able to drag around something called a VirtualObject, which is in fact a special SCNNode. In addition, you'll have a nice Focus Square that can be useful for any purpose - like initially placing objects, or adding a floor or a wall. (Some of the code for the dragging effect and the focus-square usage is kind of merged or linked together - doing it without the focus square would actually be more complicated.)

Getting started: copy the following files from the AR Interaction template into your empty project:

  • Utilities.swift (I usually rename this file to Extensions.swift; it contains some basic extensions that are needed)
  • FocusSquare.swift
  • FocusSquareSegment.swift
  • ThresholdPanGesture.swift
  • VirtualObject.swift
  • VirtualObjectLoader.swift
  • VirtualObjectARView.swift

Add UIGestureRecognizerDelegate to the ViewController class definition, like so:

class ViewController: UIViewController, ARSCNViewDelegate, UIGestureRecognizerDelegate {

Add this code to ViewController.swift, in the definitions section before viewDidLoad:

// MARK: for the Focus Square
// SUPER IMPORTANT: the screenCenter must be defined this way
var focusSquare = FocusSquare()
var screenCenter: CGPoint {
    let bounds = sceneView.bounds
    return CGPoint(x: bounds.midX, y: bounds.midY)
}
var isFocusSquareEnabled : Bool = true


// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
/// The tracked screen position used to update the `trackedObject`'s position in `updateObjectToCurrentTrackingPosition()`.
private var currentTrackingPosition: CGPoint?

/**
 The object that has been most recently interacted with.
 The `selectedObject` can be moved at any time with the tap gesture.
 */
var selectedObject: VirtualObject?

/// The object that is tracked for use by the pan and rotation gestures.
private var trackedObject: VirtualObject? {
    didSet {
        guard trackedObject != nil else { return }
        selectedObject = trackedObject
    }
}

/// Developer setting to translate assuming the detected plane extends infinitely.
let translateAssumingInfinitePlane = true
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***

In viewDidLoad, before you set up the scene, add this code:

// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
let panGesture = ThresholdPanGesture(target: self, action: #selector(didPan(_:)))
panGesture.delegate = self

// Add gestures to the `sceneView`.
sceneView.addGestureRecognizer(panGesture)
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***

At the very end of ViewController.swift, add this code:

// MARK: - Pan Gesture Block
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
@objc
func didPan(_ gesture: ThresholdPanGesture) {
    switch gesture.state {
    case .began:
        // Check for interaction with a new object.
        if let object = objectInteracting(with: gesture, in: sceneView) {
            trackedObject = object // as? VirtualObject
        }

    case .changed where gesture.isThresholdExceeded:
        guard let object = trackedObject else { return }
        let translation = gesture.translation(in: sceneView)

        let currentPosition = currentTrackingPosition ?? CGPoint(sceneView.projectPoint(object.position))

        // The `currentTrackingPosition` is used to update the `selectedObject` in `updateObjectToCurrentTrackingPosition()`.
        currentTrackingPosition = CGPoint(x: currentPosition.x + translation.x, y: currentPosition.y + translation.y)

        gesture.setTranslation(.zero, in: sceneView)

    case .changed:
        // Ignore changes to the pan gesture until the threshold for displacment has been exceeded.
        break

    case .ended:
        // Update the object's anchor when the gesture ended.
        guard let existingTrackedObject = trackedObject else { break }
        addOrUpdateAnchor(for: existingTrackedObject)
        fallthrough

    default:
        // Clear the current position tracking.
        currentTrackingPosition = nil
        trackedObject = nil
    }
}

// - MARK: Object anchors
/// - Tag: AddOrUpdateAnchor
func addOrUpdateAnchor(for object: VirtualObject) {
    // If the anchor is not nil, remove it from the session.
    if let anchor = object.anchor {
        sceneView.session.remove(anchor: anchor)
    }

    // Create a new anchor with the object's current transform and add it to the session
    let newAnchor = ARAnchor(transform: object.simdWorldTransform)
    object.anchor = newAnchor
    sceneView.session.add(anchor: newAnchor)
}


private func objectInteracting(with gesture: UIGestureRecognizer, in view: ARSCNView) -> VirtualObject? {
    for index in 0..<gesture.numberOfTouches {
        let touchLocation = gesture.location(ofTouch: index, in: view)

        // Look for an object directly under the `touchLocation`.
        if let object = virtualObject(at: touchLocation) {
            return object
        }
    }

    // As a last resort look for an object under the center of the touches.
    // return virtualObject(at: gesture.center(in: view))
    return virtualObject(at: (gesture.view?.center)!)
}


/// Hit tests against the `sceneView` to find an object at the provided point.
func virtualObject(at point: CGPoint) -> VirtualObject? {

    // let hitTestOptions: [SCNHitTestOption: Any] = [.boundingBoxOnly: true]
    let hitTestResults = sceneView.hitTest(point, options: [SCNHitTestOption.categoryBitMask: 0b00000010, SCNHitTestOption.searchMode: SCNHitTestSearchMode.any.rawValue as NSNumber])
    // let hitTestOptions: [SCNHitTestOption: Any] = [.boundingBoxOnly: true]
    // let hitTestResults = sceneView.hitTest(point, options: hitTestOptions)

    return hitTestResults.lazy.compactMap { result in
        return VirtualObject.existingObjectContainingNode(result.node)
        }.first
}

/**
 If a drag gesture is in progress, update the tracked object's position by
 converting the 2D touch location on screen (`currentTrackingPosition`) to
 3D world space.
 This method is called per frame (via `SCNSceneRendererDelegate` callbacks),
 allowing drag gestures to move virtual objects regardless of whether one
 drags a finger across the screen or moves the device through space.
 - Tag: updateObjectToCurrentTrackingPosition
 */
@objc
func updateObjectToCurrentTrackingPosition() {
    guard let object = trackedObject, let position = currentTrackingPosition else { return }
    translate(object, basedOn: position, infinitePlane: translateAssumingInfinitePlane, allowAnimation: true)
}

/// - Tag: DragVirtualObject
func translate(_ object: VirtualObject, basedOn screenPos: CGPoint, infinitePlane: Bool, allowAnimation: Bool) {
    guard let cameraTransform = sceneView.session.currentFrame?.camera.transform,
        let result = smartHitTest(screenPos,
                                  infinitePlane: infinitePlane,
                                  objectPosition: object.simdWorldPosition,
                                  allowedAlignments: [ARPlaneAnchor.Alignment.horizontal]) else { return }

    let planeAlignment: ARPlaneAnchor.Alignment
    if let planeAnchor = result.anchor as? ARPlaneAnchor {
        planeAlignment = planeAnchor.alignment
    } else if result.type == .estimatedHorizontalPlane {
        planeAlignment = .horizontal
    } else if result.type == .estimatedVerticalPlane {
        planeAlignment = .vertical
    } else {
        return
    }

    /*
     Plane hit test results are generally smooth. If we did *not* hit a plane,
     smooth the movement to prevent large jumps.
     */
    let transform = result.worldTransform
    let isOnPlane = result.anchor is ARPlaneAnchor
    object.setTransform(transform,
                        relativeTo: cameraTransform,
                        smoothMovement: !isOnPlane,
                        alignment: planeAlignment,
                        allowAnimation: allowAnimation)
}
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***

Add some Focus Square code:

// MARK: - Focus Square (code by Apple, some by me)
func updateFocusSquare(isObjectVisible: Bool) {
    if isObjectVisible {
        focusSquare.hide()
    } else {
        focusSquare.unhide()
    }

    // Perform hit testing only when ARKit tracking is in a good state.
    if let camera = sceneView.session.currentFrame?.camera, case .normal = camera.trackingState,
        let result = smartHitTest(screenCenter) {
        DispatchQueue.main.async {
            self.sceneView.scene.rootNode.addChildNode(self.focusSquare)
            self.focusSquare.state = .detecting(hitTestResult: result, camera: camera)
        }
    } else {
        DispatchQueue.main.async {
            self.focusSquare.state = .initializing
            self.sceneView.pointOfView?.addChildNode(self.focusSquare)
        }
    }
}

And add some control functions:

func hideFocusSquare()  { DispatchQueue.main.async { self.updateFocusSquare(isObjectVisible: true) } }  // to hide the focus square
func showFocusSquare()  { DispatchQueue.main.async { self.updateFocusSquare(isObjectVisible: false) } } // to show the focus square

Copy the entire smartHitTest function from VirtualObjectARView.swift into ViewController.swift (so it exists twice):

func smartHitTest(_ point: CGPoint,
                  infinitePlane: Bool = false,
                  objectPosition: float3? = nil,
                  allowedAlignments: [ARPlaneAnchor.Alignment] = [.horizontal, .vertical]) -> ARHitTestResult? {

    // Perform the hit test.
    let results = sceneView.hitTest(point, types: [.existingPlaneUsingGeometry, .estimatedVerticalPlane, .estimatedHorizontalPlane])

    // 1. Check for a result on an existing plane using geometry.
    if let existingPlaneUsingGeometryResult = results.first(where: { $0.type == .existingPlaneUsingGeometry }),
        let planeAnchor = existingPlaneUsingGeometryResult.anchor as? ARPlaneAnchor, allowedAlignments.contains(planeAnchor.alignment) {
        return existingPlaneUsingGeometryResult
    }

    if infinitePlane {

        // 2. Check for a result on an existing plane, assuming its dimensions are infinite.
        //    Loop through all hits against infinite existing planes and either return the
        //    nearest one (vertical planes) or return the nearest one which is within 5 cm
        //    of the object's position.
        let infinitePlaneResults = sceneView.hitTest(point, types: .existingPlane)

        for infinitePlaneResult in infinitePlaneResults {
            if let planeAnchor = infinitePlaneResult.anchor as? ARPlaneAnchor, allowedAlignments.contains(planeAnchor.alignment) {
                if planeAnchor.alignment == .vertical {
                    // Return the first vertical plane hit test result.
                    return infinitePlaneResult
                } else {
                    // For horizontal planes we only want to return a hit test result
                    // if it is close to the current object's position.
                    if let objectY = objectPosition?.y {
                        let planeY = infinitePlaneResult.worldTransform.translation.y
                        if objectY > planeY - 0.05 && objectY < planeY + 0.05 {
                            return infinitePlaneResult
                        }
                    } else {
                        return infinitePlaneResult
                    }
                }
            }
        }
    }

    // 3. As a final fallback, check for a result on estimated planes.
    let vResult = results.first(where: { $0.type == .estimatedVerticalPlane })
    let hResult = results.first(where: { $0.type == .estimatedHorizontalPlane })
    switch (allowedAlignments.contains(.horizontal), allowedAlignments.contains(.vertical)) {
    case (true, false):
        return hResult
    case (false, true):
        // Allow fallback to horizontal because we assume that objects meant for vertical placement
        // (like a picture) can always be placed on a horizontal surface, too.
        return vResult ?? hResult
    case (true, true):
        if hResult != nil && vResult != nil {
            return hResult!.distance < vResult!.distance ? hResult! : vResult!
        } else {
            return hResult ?? vResult
        }
    default:
        return nil
    }
}
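The 5 cm rule in step 2 is just a tolerance comparison between the object's height and the plane's height. Pulled out as a standalone helper (the function name is mine, not Apple's):

```swift
/// True when a horizontal plane lies within `tolerance` metres of the
/// object's current height - the same condition as
/// `objectY > planeY - 0.05 && objectY < planeY + 0.05` in smartHitTest.
func planeIsNearObject(objectY: Float, planeY: Float, tolerance: Float = 0.05) -> Bool {
    return abs(objectY - planeY) < tolerance
}
```

Without this check, a drag over an infinite plane far below or above the object would teleport it to that plane's height.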

You'll probably see some errors in the copied function concerning hitTest. Just correct them like this:

hitTest... // which gives an Error
sceneView.hitTest... // this should correct it

Implement the renderer's updateAtTime function and add these lines:

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // For the Focus Square
    if isFocusSquareEnabled { showFocusSquare() }

    self.updateObjectToCurrentTrackingPosition() // *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
}


At this point, you may still see about a dozen errors and warnings in the imported files; this can happen when you do this in Swift 5 while some of the files are Swift 4. Just let Xcode correct the errors. (It's about renaming some code statements; Xcode knows best.)

Go into VirtualObject.swift and search for this code block:

if smoothMovement {
    let hitTestResultDistance = simd_length(positionOffsetFromCamera)

    // Add the latest position and keep up to 10 recent distances to smooth with.
    recentVirtualObjectDistances.append(hitTestResultDistance)
    recentVirtualObjectDistances = Array(recentVirtualObjectDistances.suffix(10))

    let averageDistance = recentVirtualObjectDistances.average!
    let averagedDistancePosition = simd_normalize(positionOffsetFromCamera) * averageDistance
    simdPosition = cameraWorldPosition + averagedDistancePosition
} else {
    simdPosition = cameraWorldPosition + positionOffsetFromCamera
}
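For context, the smoothing that block performs is a rolling average over the last 10 hit-test distances; averaging makes the object trail behind the target position, which is why replacing it with the direct assignment feels more immediate. The same idea as a tiny standalone type (the RollingAverage name is mine, not Apple's):

```swift
// Rolling average over the most recent `window` samples, mirroring the
// recentVirtualObjectDistances logic in VirtualObject.swift.
struct RollingAverage {
    private var samples: [Float] = []
    private let window: Int
    init(window: Int = 10) { self.window = window }

    // Append a sample, keep only the most recent `window` values,
    // and return the current average.
    mutating func add(_ value: Float) -> Float {
        samples.append(value)
        samples = Array(samples.suffix(window))
        return samples.reduce(0, +) / Float(samples.count)
    }
}
```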

Comment it out, or replace the whole block with this single line of code:

simdPosition = cameraWorldPosition + positionOffsetFromCamera

At this point, you should be able to compile the project and run it on a device. You should see the spaceship and a yellow focus square that should already work.

To start placing an object you can drag, you need some function to create a so-called VirtualObject, as I said in the beginning.

Use this example function for testing (add it somewhere in your view controller):

override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {

    if focusSquare.state != .initializing {
        let position = SCNVector3(focusSquare.lastPosition!)

        // *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
        let testObject = VirtualObject() // give it some name when you don't have anything to load
        testObject.geometry = SCNCone(topRadius: 0.0, bottomRadius: 0.2, height: 0.5)
        testObject.geometry?.firstMaterial?.diffuse.contents = UIColor.red
        testObject.categoryBitMask = 0b00000010
        testObject.name = "test"
        testObject.castsShadow = true
        testObject.position = position

        sceneView.scene.rootNode.addChildNode(testObject)
    }
}

Note: everything you want to drag around on a plane must be set up using VirtualObject() instead of SCNNode(). Everything else about a VirtualObject is the same as an SCNNode.

(You can also add some common SCNNode extensions, like one that loads scenes by their name - useful when referencing imported models.)

Have fun!

As @ZAY mentioned, Apple made it quite confusing; moreover, the ARRaycastQuery they use only works on iOS 13 and up. So I arrived at a solution that uses the current camera orientation to compute the translation on a plane in world coordinates.

First, using this snippet, we can get the current orientation the user is facing, as a quaternion:

private func getOrientationYRadians()-> Float {
    guard let cameraNode = arSceneView.pointOfView else { return 0 }
    
    //Get camera orientation expressed as a quaternion
    let q = cameraNode.orientation
    
    //Calculate rotation around y-axis (heading) from quaternion and convert angle so that
    //0 is along -z-axis (forward in SceneKit) and positive angle is clockwise rotation.
    let alpha = Float.pi - atan2f( (2*q.y*q.w)-(2*q.x*q.z), 1-(2*pow(q.y,2))-(2*pow(q.z,2)) )

    // here I convert the angle to be 0 when the user is facing +z-axis 
    return alpha <= Float.pi ? abs(alpha - (Float.pi)) : (3*Float.pi) - alpha
}
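The same computation can be written against raw quaternion components so it runs without SceneKit (the yawRadians name is mine; the math is identical to getOrientationYRadians above):

```swift
import Foundation

// Heading (rotation around the y-axis) from quaternion components.
// Identity (facing along the axis the conversion treats as zero)
// yields 0; a 90-degree turn about y yields pi/2.
func yawRadians(x: Float, y: Float, z: Float, w: Float) -> Float {
    let alpha = Float.pi - atan2f((2 * y * w) - (2 * x * z),
                                  1 - (2 * y * y) - (2 * z * z))
    return alpha <= Float.pi ? abs(alpha - Float.pi) : (3 * Float.pi) - alpha
}
```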

The handlePan method:

private var lastPanLocation2d: CGPoint!
@objc func handlePan(panGesture: UIPanGestureRecognizer) {
    let state = panGesture.state
    
    guard state != .failed && state != .cancelled else {
        return
    }
    
    let touchLocation = panGesture.location(in: self)
    
    if (state == .began) {
        lastPanLocation2d = touchLocation
    }
    
    // 200 here is a random value that controls the smoothness of the dragging effect
    let deltaX = Float(touchLocation.x - lastPanLocation2d!.x)/200
    let deltaY = Float(touchLocation.y - lastPanLocation2d!.y)/200
    
    let currentYOrientationRadians = getOrientationYRadians()
    // convert delta in the 2D dimensions to the 3d world space using the current rotation
    let deltaX3D = (deltaY*sin(currentYOrientationRadians))+(deltaX*cos(currentYOrientationRadians))
    let deltaY3D = (deltaY*cos(currentYOrientationRadians))+(-deltaX*sin(currentYOrientationRadians))
    
    // assuming that the node is currently positioned on a plane so the y-translation will be zero
    let translation = SCNVector3Make(deltaX3D, 0.0, deltaY3D)
    nodeToDrag.localTranslate(by: translation)
    
    lastPanLocation2d = touchLocation
}
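The 2D-to-3D conversion in the middle of handlePan can be isolated as a pure function (rotateDelta is my name for it). At yaw 0, the screen delta maps straight onto the world x/z axes:

```swift
import Foundation

// Rotate a 2D screen-space drag delta into the world XZ plane by the
// camera's yaw - the same trigonometry used in handlePan above.
func rotateDelta(dx: Float, dy: Float, yaw: Float) -> (x: Float, z: Float) {
    let x = (dy * sinf(yaw)) + (dx * cosf(yaw))
    let z = (dy * cosf(yaw)) - (dx * sinf(yaw))
    return (x, z)
}
```

This is why the drag direction stays intuitive even after the user walks around the object: the delta is re-expressed in the camera's current heading before it is applied.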
