
C++ Kinect v2 & freenect2: how to convert depth data to real world coordinates

I am trying to compute real-world XYZ coordinates with a Kinect v2 camera (on Linux), but my calculation gives me wrong results.

Here is the code:

cv::Point3f xyzWorld = {0.0f, 0.0f, 0.0f};

// Pinhole back-projection: pixel (x, y) at depth pointDepth,
// using the depth camera intrinsics fx, fy, cx, cy.
xyzWorld.z = pointDepth;
xyzWorld.x = ((float)x - depthcx) * xyzWorld.z / depthfx;
xyzWorld.y = ((float)y - depthcy) * xyzWorld.z / depthfy;

return xyzWorld;

I think the problem comes from the values I am using for the depth intrinsics fx, fy, cx and cy.

Can anyone help me?

I am using freenect2.

Why not just use the OpenNI implementation?

 OniStatus VideoStream::convertDepthToWorldCoordinates(float depthX, float depthY, float depthZ, float* pWorldX, float* pWorldY, float* pWorldZ)
{
    if (m_pSensorInfo->sensorType != ONI_SENSOR_DEPTH)
    {
        m_errorLogger.Append("convertDepthToWorldCoordinates: Stream is not from DEPTH\n");
        return ONI_STATUS_NOT_SUPPORTED;
    }

    float normalizedX = depthX / m_worldConvertCache.resolutionX - .5f;
    float normalizedY = .5f - depthY / m_worldConvertCache.resolutionY;

    OniVideoMode videoMode;
    int size = sizeof(videoMode);
    getProperty(ONI_STREAM_PROPERTY_VIDEO_MODE, &videoMode, &size);

    float const convertToMillimeters = (videoMode.pixelFormat == ONI_PIXEL_FORMAT_DEPTH_100_UM) ? 10.f : 1.f;
    *pWorldX = (normalizedX * depthZ * m_worldConvertCache.xzFactor) / convertToMillimeters;
    *pWorldY = (normalizedY * depthZ * m_worldConvertCache.yzFactor) / convertToMillimeters;
    *pWorldZ = depthZ / convertToMillimeters;

    return ONI_STATUS_OK;
}

OniStatus VideoStream::convertWorldToDepthCoordinates(float worldX, float worldY, float worldZ, float* pDepthX, float* pDepthY, float* pDepthZ)
{
    if (m_pSensorInfo->sensorType != ONI_SENSOR_DEPTH)
    {
        m_errorLogger.Append("convertWorldToDepthCoordinates: Stream is not from DEPTH\n");
        return ONI_STATUS_NOT_SUPPORTED;
    }

    *pDepthX = m_worldConvertCache.coeffX * worldX / worldZ + m_worldConvertCache.halfResX;
    *pDepthY = m_worldConvertCache.halfResY - m_worldConvertCache.coeffY * worldY / worldZ;
    *pDepthZ = worldZ;
    return ONI_STATUS_OK;
}

And the world conversion cache:

 void VideoStream::refreshWorldConversionCache()
{
    if (m_pSensorInfo->sensorType != ONI_SENSOR_DEPTH)
    {
        return;
    }

    OniVideoMode videoMode;
    int size = sizeof(videoMode);
    getProperty(ONI_STREAM_PROPERTY_VIDEO_MODE, &videoMode, &size);

    size = sizeof(float);
    float horizontalFov;
    float verticalFov;
    getProperty(ONI_STREAM_PROPERTY_HORIZONTAL_FOV, &horizontalFov, &size);
    getProperty(ONI_STREAM_PROPERTY_VERTICAL_FOV, &verticalFov, &size);

    m_worldConvertCache.xzFactor = tan(horizontalFov / 2) * 2;
    m_worldConvertCache.yzFactor = tan(verticalFov / 2) * 2;
    m_worldConvertCache.resolutionX = videoMode.resolutionX;
    m_worldConvertCache.resolutionY = videoMode.resolutionY;
    m_worldConvertCache.halfResX = m_worldConvertCache.resolutionX / 2;
    m_worldConvertCache.halfResY = m_worldConvertCache.resolutionY / 2;
    m_worldConvertCache.coeffX = m_worldConvertCache.resolutionX / m_worldConvertCache.xzFactor;
    m_worldConvertCache.coeffY = m_worldConvertCache.resolutionY / m_worldConvertCache.yzFactor;
}

struct WorldConversionCache
    {
        float xzFactor;
        float yzFactor;
        float coeffX;
        float coeffY;
        int resolutionX;
        int resolutionY;
        int halfResX;
        int halfResY;
    } m_worldConvertCache;

All taken from the OpenNI GitHub repository.

You can get the horizontal and vertical field of view directly from the description of each frame.
