
Kinect v2 mapping color coordinates to camera space

I am trying to map coordinates from the color space to the camera space. The code I am using is the following:

HRESULT ModelRecognizer::MapColorToCameraCoordinates(const std::vector<ColorSpacePoint>& colorsps, std::vector<CameraSpacePoint>& camerasps)
{
    //Access frame
    HRESULT hr = GetDepthFrame();

    if (SUCCEEDED(hr))
    {
        ICoordinateMapper* pMapper;
        hr = m_pKinectSensor->get_CoordinateMapper(&pMapper);
        if (SUCCEEDED(hr))
        {
            CameraSpacePoint* cameraSpacePoints = new CameraSpacePoint[cColorWidth * cColorHeight];
            hr = pMapper->MapColorFrameToCameraSpace(nDepthWidth * nDepthHeight, depthImageBuffer, cColorWidth * cColorHeight, cameraSpacePoints);
            if (SUCCEEDED(hr))
            {
                for (ColorSpacePoint colorsp : colorsps)
                {
                    long colorIndex = (long)(colorsp.Y * cColorWidth + colorsp.X);
                    CameraSpacePoint csp = cameraSpacePoints[colorIndex];
                    camerasps.push_back(csp);
                }
            }
            delete[] cameraSpacePoints;
        }
    }
    ReleaseDepthFrame();
    return hr;
}

I do not get any errors; however, the result seems to be rotated by 180 degrees and offset. Does anyone have suggestions as to what I am doing wrong? Any help is appreciated.

Just to give a bigger picture of why I need this:

I am tracking colored tape pasted on a table in the color image using OpenCV. Then I create walls at the locations of the tape in a 3D mesh. Furthermore, I am using KinectFusion to generate a mesh of the other objects on the table. However, when I open both meshes in MeshLab, the misalignment can clearly be seen. Since I assume KinectFusion's mesh is created correctly in camera space, and I create the mesh of the walls exactly at the CameraSpacePoints returned by the function above, I am fairly sure the error lies in the coordinate-mapping step.

Images showing the misalignment can be found at http://imgur.com/UsrEdZb,ZseN2br#0 and http://imgur.com/UsrEdZb,ZseN2br#1

I finally figured it out: for whatever reason the returned CameraSpacePoints were mirrored at the origin in X and Y, but not in Z. If anyone has an explanation for this, I am still interested.

It now works with the following code:

/// <summary>
/// Maps coordinates from ColorSpace to CameraSpace
/// Expects that the Points in ColorSpace are mirrored at x (as Kinects returns it by default).
/// </summary>
HRESULT ModelRecognizer::MapColorToCameraCoordinates(const std::vector<ColorSpacePoint>& colorsps, std::vector<CameraSpacePoint>& camerasps)
{
    //Access frame
    HRESULT hr = GetDepthFrame();

    if (SUCCEEDED(hr))
    {
        ICoordinateMapper* pMapper;
        hr = m_pKinectSensor->get_CoordinateMapper(&pMapper);
        if (SUCCEEDED(hr))
        {
            CameraSpacePoint* cameraSpacePoints = new CameraSpacePoint[cColorWidth * cColorHeight];
            hr = pMapper->MapColorFrameToCameraSpace(nDepthWidth * nDepthHeight, depthImageBuffer, cColorWidth * cColorHeight, cameraSpacePoints);
            if (SUCCEEDED(hr))
            {
                for (ColorSpacePoint colorsp : colorsps)
                {
                    // Round to the nearest pixel; truncating the float coordinates
                    // would bias the lookup towards the top-left neighbor.
                    int colorX = static_cast<int>(colorsp.X + 0.5f);
                    int colorY = static_cast<int>(colorsp.Y + 0.5f);
                    // Clamp so points on the frame border cannot index past the table.
                    if (colorX < 0) colorX = 0; else if (colorX >= cColorWidth)  colorX = cColorWidth - 1;
                    if (colorY < 0) colorY = 0; else if (colorY >= cColorHeight) colorY = cColorHeight - 1;
                    long colorIndex = (long)colorY * cColorWidth + colorX;
                    CameraSpacePoint csp = cameraSpacePoints[colorIndex];
                    // Mirror at the origin in X and Y to undo the observed mirroring.
                    camerasps.push_back(CameraSpacePoint{ -csp.X, -csp.Y, csp.Z });
                }
            }
            delete[] cameraSpacePoints;
        }
    }
    ReleaseDepthFrame();
    return hr;
}
