
ARKit3 How to use TrueDepth camera for face tracking and face meshes of other people?

I'm interested in working with ARKit 3 and a couple of iPads to create a multi-user (collaborative) experience, as support for collaborative AR seems to have improved according to WWDC '19.

Apple talks a lot about face tracking and motion capture, but it sounds like face tracking is only supported on the front-facing camera (facing the person holding the device). Is there no way to do face tracking of your friends who are sharing the experience? In the WWDC demo video, it looks like the motion-capture character is generated from a person in the user's view, and the Minecraft demo shows people in the user's view being mixed with Minecraft content in AR. This suggests that the back camera is handling this. Yet I thought the point of AR was to attach virtual objects to the physical world in front of you. Reality Composer has an example with face tracking and a quote bubble that follows the face around, but because I don't have a device with a depth camera, I don't know whether that bubble is meant to follow you, the user, or someone else in the camera's view.

In short, I'm a little confused about what sorts of things I can do with face tracking, people occlusion, and body tracking with respect to other people in a shared AR environment. Which cameras are in use, and which features can I apply to other people as opposed to just myself (selfie style)?

Lastly, assuming that I CAN do face and body tracking of other people in my view, and that I can do occlusion for other people, would someone direct me to some example code? I'd also like to use the depth information from the scene (again, if that's possible), but maybe this requires some completely different API.

Since I don't yet have a device with a TrueDepth camera, I can't test this myself using the example project here: https://developer.apple.com/documentation/arkit/tracking_and_visualizing_faces. I'm trying to determine from people's answers whether I can build the system I want at all before purchasing the necessary hardware.

ARKit 3 provides the ability to use both front and back cameras at the same time.
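As a minimal sketch (assuming iOS 13+), you opt in to this via `userFaceTrackingEnabled` on a world-tracking configuration, after checking `supportsUserFaceTracking`:

```swift
import ARKit

// Minimal sketch: a rear-camera world-tracking session that also delivers
// ARFaceAnchor updates from the front TrueDepth camera (ARKit 3 / iOS 13+).
let session = ARSession()
let configuration = ARWorldTrackingConfiguration()

if ARWorldTrackingConfiguration.supportsUserFaceTracking {
    // Face data from the front camera arrives in the same session
    // as rear-camera world tracking.
    configuration.userFaceTrackingEnabled = true
}

session.run(configuration)
```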

Face tracking uses the front camera and requires a device with a TrueDepth camera. ARKit 3 can now track up to three faces at once, and face tracking lets you capture detailed facial movements.
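A sketch of a multi-face session, letting `supportedNumberOfTrackedFaces` report the device's limit (3 on TrueDepth hardware running ARKit 3):

```swift
import ARKit

// Sketch: front-camera face tracking for as many faces as the device supports.
func runFaceTracking(on session: ARSession) {
    // ARFaceTrackingConfiguration is only supported on TrueDepth devices.
    guard ARFaceTrackingConfiguration.isSupported else { return }

    let configuration = ARFaceTrackingConfiguration()
    configuration.maximumNumberOfTrackedFaces =
        ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces
    session.run(configuration)
    // Each tracked face arrives as an ARFaceAnchor carrying a face mesh
    // (anchor.geometry) and blend-shape coefficients (anchor.blendShapes).
}
```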

Body tracking and motion capture are performed with the rear camera. This allows a body to be detected and mapped onto a virtual skeleton that your app can use to capture position data.
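A sketch of the body-tracking side (the `BodyTracker` wrapper class is hypothetical; the ARKit types and calls are the real ARKit 3 API). Each detected person is delivered as an `ARBodyAnchor` whose skeleton exposes per-joint transforms:

```swift
import ARKit

// Sketch: rear-camera body tracking (requires an A12 chip or later).
// BodyTracker is a hypothetical wrapper, not an ARKit type.
final class BodyTracker: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARBodyTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARBodyTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let body as ARBodyAnchor in anchors {
            // modelTransform(for:) returns the joint's transform relative
            // to the body anchor; columns.3 holds the joint's position.
            if let head = body.skeleton.modelTransform(for: .head) {
                print("head position:", head.columns.3)
            }
        }
    }
}
```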

For example, you could capture the body motion of someone using the rear camera, capture the facial expression of the person watching that motion using the front camera, and combine both in the one ARKit scene.
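A hedged sketch of the front-camera half of that combination, building on the `userFaceTrackingEnabled` option shown above: the rear camera drives the scene while the front camera reports the user's expression as blend shapes. (The `CombinedTracker` class is hypothetical, and combining these face anchors with a body-tracking configuration in one session isn't shown here.)

```swift
import ARKit

// Hedged sketch: rear camera drives the AR scene while the front camera
// reports the user's facial expression via ARFaceAnchor blend shapes.
final class CombinedTracker: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARWorldTrackingConfiguration.supportsUserFaceTracking else { return }
        let configuration = ARWorldTrackingConfiguration()
        configuration.userFaceTrackingEnabled = true
        session.delegate = self
        session.run(configuration)
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let face as ARFaceAnchor in anchors {
            // Blend shapes are 0...1 coefficients, one per facial feature.
            let jawOpen = face.blendShapes[.jawOpen]?.floatValue ?? 0
            print("jawOpen:", jawOpen)
        }
    }
}
```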
