
Kinect: How to map depth coordinates to world coordinates using OpenNI?

I need to map the depth map to world coordinates using OpenNI.

So I can't use any of the MSDN code, such as "MapDepthToSkeletonPoint" or "NuiTransformDepthImageToSkeleton".

I tried to use the following equations from " http://www.tagwith.com/question_495583_kinect-from-color-space-to-world-coordinates ":

x = (u - principalPointX) / focalLengthX * d;
y = (v - principalPointY) / focalLengthY * d;
z = d;

However, I could not obtain principalPointX or focalLengthX: I tried the method "getDepthCameraIntrinsics", but it returned NaN values.

I hope somebody can help with this transformation.

I found a solution to this problem: get the device handle through the OpenNIGrabber, then read the depth output mode to obtain nXRes and nYRes, as in the following code:

// nXRes/nYRes are the depth stream's output resolution;
// getDepthFocalLength() returns the depth camera's focal length in pixels.
depth_principal_x = ((pcl::OpenNIGrabber*)capture_)->getDevice()->getDepthOutputMode().nXRes;
depth_principal_y = ((pcl::OpenNIGrabber*)capture_)->getDevice()->getDepthOutputMode().nYRes;
focal_length = ((pcl::OpenNIGrabber*)capture_)->getDevice()->getDepthFocalLength();

Then apply the equations using the variables assigned above:

x = (u - depth_principal_x) / focal_length * d;
y = (v - depth_principal_y) / focal_length * d;
z = d;

I hope this helps.
