
Kinect v2 mapping color coordinates to camera space

I am trying to map coordinates from color space to camera space. The code I am using is the following:

HRESULT ModelRecognizer::MapColorToCameraCoordinates(const std::vector<ColorSpacePoint>& colorsps, std::vector<CameraSpacePoint>& camerasps)
{
    //Access frame
    HRESULT hr = GetDepthFrame();

    if (SUCCEEDED(hr))
    {
        ICoordinateMapper* pMapper;
        hr = m_pKinectSensor->get_CoordinateMapper(&pMapper);
        if (SUCCEEDED(hr))
        {
            CameraSpacePoint* cameraSpacePoints = new CameraSpacePoint[cColorWidth * cColorHeight];
            hr = pMapper->MapColorFrameToCameraSpace(nDepthWidth * nDepthHeight, depthImageBuffer, cColorWidth * cColorHeight, cameraSpacePoints);
            if (SUCCEEDED(hr))
            {
                for (ColorSpacePoint colorsp : colorsps)
                {
                    long colorIndex = (long)(colorsp.Y * cColorWidth + colorsp.X);
                    CameraSpacePoint csp = cameraSpacePoints[colorIndex];
                    camerasps.push_back(csp);
                }
            }
            delete[] cameraSpacePoints;
        }
    }
    ReleaseDepthFrame();
    return hr;
}

I don't get any errors; however, the result appears to be rotated 180 degrees and offset. Does anyone have a suggestion as to what I am doing wrong? Any help is appreciated.

Just to give a better understanding of why I need this:

I am using OpenCV to track colored tape stuck on a table top, from the color image. I then create walls in a 3D mesh at the tape locations. Additionally, I am using KinectFusion to generate a mesh of the other objects on the table. However, when I open both meshes in MeshLab, the misalignment is clearly visible. Since I assume the KinectFusion mesh is created correctly in camera space, and my walls' mesh is created from the CameraSpacePoints returned by the function above, I am fairly sure the error lies in the coordinate-mapping process.

Images showing the misalignment can be found at http://imgur.com/UsrEdZb,ZseN2br#0 http://imgur.com/UsrEdZb,ZseN2br#1

I finally figured it out: for whatever reason, the returned CameraSpacePoints were mirrored at the origin in X and Y, but not in Z. If anyone has an explanation for this, I am still interested.

It now works using the following code:

/// <summary>
/// Maps coordinates from ColorSpace to CameraSpace
/// Expects that the points in ColorSpace are mirrored at x (as the Kinect returns them by default).
/// </summary>
HRESULT ModelRecognizer::MapColorToCameraCoordinates(const std::vector<ColorSpacePoint>& colorsps, std::vector<CameraSpacePoint>& camerasps)
{
    //Access frame
    HRESULT hr = GetDepthFrame();

    if (SUCCEEDED(hr))
    {
        ICoordinateMapper* pMapper;
        hr = m_pKinectSensor->get_CoordinateMapper(&pMapper);
        if (SUCCEEDED(hr))
        {
            CameraSpacePoint* cameraSpacePoints = new CameraSpacePoint[cColorWidth * cColorHeight];
            hr = pMapper->MapColorFrameToCameraSpace(nDepthWidth * nDepthHeight, depthImageBuffer, cColorWidth * cColorHeight, cameraSpacePoints);
            if (SUCCEEDED(hr))
            {
                for (ColorSpacePoint colorsp : colorsps)
                {
                    // Round to the nearest pixel instead of truncating
                    int colorX = static_cast<int>(colorsp.X + 0.5f);
                    int colorY = static_cast<int>(colorsp.Y + 0.5f);
                    long colorIndex = (long)(colorY * cColorWidth + colorX);
                    CameraSpacePoint csp = cameraSpacePoints[colorIndex];
                    // Un-mirror: the mapped points come back negated in X and Y, but not Z
                    camerasps.push_back(CameraSpacePoint{ -csp.X, -csp.Y, csp.Z });
                }
            }
            delete[] cameraSpacePoints;
        }
    }
    ReleaseDepthFrame();
    return hr;
}
