
Kinect: From Color Space to world coordinates

I am tracking a ball using the RGB data from the Kinect. After this I look up the corresponding depth data. Both of these are working splendidly. Now I want to have the actual x, y, z world coordinates (i.e. skeleton space) instead of the x_screen, y_screen and depth values. Unfortunately the methods given by the Kinect SDK ( http://msdn.microsoft.com/en-us/library/hh973078.aspx ) don't help me. Basically I need a function "NuiImageGetSkeletonCoordinatesFromColorPixel", but it does not exist. All the functions basically go in the opposite direction.

I know this can probably be done with OpenNI, but I cannot use it for other reasons.

Is there a function that does this for me, or do I have to do the conversion myself? If I have to do it myself, how would I do this? I sketched up a little diagram ( http://i.imgur.com/ROBJW8Q.png ) - do you think this would work?

Check the CameraIntrinsics structure.

typedef struct _CameraIntrinsics
{
    float FocalLengthX;
    float FocalLengthY;
    float PrincipalPointX;
    float PrincipalPointY;
    float RadialDistortionSecondOrder;
    float RadialDistortionFourthOrder;
    float RadialDistortionSixthOrder;
}   CameraIntrinsics;

You can get it from ICoordinateMapper::GetDepthCameraIntrinsics.

Then, for every pixel (u, v, d) in depth space, you can get the coordinate in world space like this:

x = (u - principalPointX) / focalLengthX * d;
y = (v - principalPointY) / focalLengthY * d;
z = d;
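As a sanity check, the back-projection above can be wrapped in a small standalone function. The struct below only mirrors the intrinsics fields needed here, and the calibration values used in any example call are made-up placeholders, not real Kinect calibration data:

```cpp
#include <cassert>
#include <cmath>

// Minimal mirror of the CameraIntrinsics fields the formula needs.
struct Intrinsics {
    float focalLengthX;
    float focalLengthY;
    float principalPointX;
    float principalPointY;
};

struct Point3 { float x, y, z; };

// Back-project a depth pixel (u, v) with depth d into camera/world
// space using the pinhole model from the answer above.
Point3 depthPixelToCameraSpace(float u, float v, float d, const Intrinsics& in) {
    Point3 p;
    p.x = (u - in.principalPointX) / in.focalLengthX * d;
    p.y = (v - in.principalPointY) / in.focalLengthY * d;
    p.z = d;
    return p;
}
```

A pixel exactly at the principal point maps to (0, 0, d), and pixels further from the image center spread out proportionally with depth.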

For a color space pixel, you first need to find its associated depth space pixel, which you can get from ICoordinateMapper::MapColorFrameToDepthSpace. Since not every color pixel has an associated depth pixel (1920x1080 vs. 512x424), you can't have a full-HD color point cloud.
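Once the mapper has filled a lookup table with one depth coordinate per color pixel, reading the depth pixel for a given color pixel is a plain array lookup. This is a sketch with a hand-rolled DepthSpacePoint and a synthetic table rather than live Kinect data; only the general behavior (unmatched color pixels marked with negative infinity) is taken from the SDK, everything else here is an illustrative assumption:

```cpp
#include <cassert>
#include <cmath>
#include <limits>
#include <vector>

// Mirrors the layout of the SDK's DepthSpacePoint (coordinates in depth pixels).
struct DepthSpacePoint { float X; float Y; };

// The mapping marks color pixels with no depth correspondence by writing
// -infinity into the table; treat those as "no match".
bool lookupDepthPixel(const std::vector<DepthSpacePoint>& table,
                      int colorX, int colorY, int colorWidth,
                      int& depthX, int& depthY) {
    const DepthSpacePoint& p =
        table[static_cast<size_t>(colorY) * colorWidth + colorX];
    if (!std::isfinite(p.X) || !std::isfinite(p.Y))
        return false;                       // this color pixel has no depth data
    depthX = static_cast<int>(p.X + 0.5f);  // round to the nearest depth pixel
    depthY = static_cast<int>(p.Y + 0.5f);
    return true;
}
```

With the (depthX, depthY) result you can index the depth frame, then feed (u, v, d) into the back-projection formula above.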
