
OpenCV: 3D Pose estimation of color markers using StereoCamera system

I have a stereo camera system and I calibrated it properly using both cv::calibrateCamera and cv::stereoCalibrate. My reprojection errors seem to be okay:

  • Cam0: 0.401427
  • Cam1: 0.388200
  • Stereo: 0.399642

Snapshot of the ongoing calibration process

I check the calibration by calling cv::stereoRectify and transforming the images with cv::initUndistortRectifyMap and cv::remap. The result is shown below (one strange thing I noticed is that, when displaying the rectified images, there are usually artifacts in the form of a deformed copy of the original image on one, or sometimes even both, images):

Rectification
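
For reference, a minimal sketch of that rectification check, assuming R0, R1, P0 and P1 come from the cv::stereoRectify call shown in the code further below; img0 and img1 are hypothetical captured frames of size 656x492:

cv::Mat map00, map01, map10, map11;
cv::initUndistortRectifyMap( cameraMatrix[0], distCoeffs[0], R0, P0,
    cv::Size( 656, 492 ), CV_16SC2, map00, map01 );
cv::initUndistortRectifyMap( cameraMatrix[1], distCoeffs[1], R1, P1,
    cv::Size( 656, 492 ), CV_16SC2, map10, map11 );

cv::Mat rect0, rect1;
cv::remap( img0, rect0, map00, map01, cv::INTER_LINEAR );
cv::remap( img1, rect1, map10, map11, cv::INTER_LINEAR );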

I also correctly estimate the position of my markers in pixel coordinates using cv::findContours on a thresholded HSV image.
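
For reference, a minimal sketch of this marker-detection step (frame, lowerHSV and upperHSV are hypothetical placeholders; the actual thresholds depend on the marker color):

cv::Mat hsv, mask;
cv::cvtColor( frame, hsv, CV_BGR2HSV );
cv::inRange( hsv, lowerHSV, upperHSV, mask );
cv::morphologyEx( mask, mask, cv::MORPH_OPEN, cv::Mat() );

std::vector<std::vector<cv::Point> > contours;
cv::findContours( mask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE );

std::vector<cv::Point2f> markerCenters;
for( size_t i = 0; i < contours.size(); i++ )
    if( contours[i].size() >= 5 ) {             // cv::fitEllipse needs at least 5 points
        cv::RotatedRect e = cv::fitEllipse( contours[i] );
        markerCenters.push_back( e.center );
    }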


Unfortunately, when I now call cv::triangulatePoints, my results are very poor compared to my estimated coordinates, especially in the x-direction:

P1 = {   58 (±1),  150 (±1), -90xx (±2xxx)  } (bottom)
P2 = {  115 (±1),  -20 (±1), -90xx (±2xxx)  } (right)
P3 = { 1155 (±6),  575 (±3), 60xxx (±20xxx) } (top-left)

Those are the results in mm in camera coordinates. Both cameras are positioned roughly 550 mm away from the checkerboard, and the square size is 13 mm. Apparently, my results are not even close to what I expect (negative and huge z-coordinates).

So my questions are:

  1. I followed the stereo_calib.cpp example very closely and I seem to get at least visually good results (see the reprojection errors). Why are my triangulation results so poor?
  2. How can I transform my results to the real-world coordinate system, so that I can actually check my results quantitatively? Do I have to do it manually as shown here, or is there some OpenCV function for that?

Here is my code:

std::vector<std::vector<cv::Point2f> > imagePoints[2];
std::vector<std::vector<cv::Point3f> > objectPoints;

imagePoints[0].resize(s->nrFrames);
imagePoints[1].resize(s->nrFrames);
objectPoints.resize( s->nrFrames );

// [Obtain image points..]
// cv::findChessboardCorners, cv::cornerSubPix

// Calc obj points
for( int i = 0; i < s->nrFrames; i++ )
    for( int j = 0; j < s->boardSize.height; j++ )
        for( int k = 0; k < s->boardSize.width; k++ )
            objectPoints[i].push_back( Point3f( j * s->squareSize, k * s->squareSize, 0 ) );


// Mono calibration
cv::Mat cameraMatrix[2], distCoeffs[2];
cameraMatrix[0] = cv::Mat::eye(3, 3, CV_64F);
cameraMatrix[1] = cv::Mat::eye(3, 3, CV_64F);

std::vector<cv::Mat> tmp0, tmp1;

double err0 = cv::calibrateCamera( objectPoints, imagePoints[0], cv::Size( 656, 492 ),
    cameraMatrix[0], distCoeffs[0], tmp0, tmp1,    
    CV_CALIB_FIX_PRINCIPAL_POINT + CV_CALIB_FIX_ASPECT_RATIO );
std::cout << "Cam0 reproj err: " << err0 << std::endl;

double err1 = cv::calibrateCamera( objectPoints, imagePoints[1], cv::Size( 656, 492 ),
    cameraMatrix[1], distCoeffs[1], tmp0, tmp1,
    CV_CALIB_FIX_PRINCIPAL_POINT + CV_CALIB_FIX_ASPECT_RATIO );
std::cout << "Cam1 reproj err: " << err1 << std::endl;

// Stereo calibration
cv::Mat R, T, E, F;

double err2 = cv::stereoCalibrate(objectPoints, imagePoints[0], imagePoints[1],
    cameraMatrix[0], distCoeffs[0], cameraMatrix[1], distCoeffs[1],
    cv::Size( 656, 492 ), R, T, E, F,
    cv::TermCriteria(CV_TERMCRIT_ITER+CV_TERMCRIT_EPS, 100, 1e-5),
    CV_CALIB_USE_INTRINSIC_GUESS + // because of mono calibration
    CV_CALIB_SAME_FOCAL_LENGTH +
    CV_CALIB_RATIONAL_MODEL +
    CV_CALIB_FIX_K3 + CV_CALIB_FIX_K4 + CV_CALIB_FIX_K5);
std::cout << "Stereo reproj err: " << err2 << std::endl;

// StereoRectify
cv::Mat R0, R1, P0, P1, Q;
Rect validRoi[2];
cv::stereoRectify( cameraMatrix[0], distCoeffs[0],
            cameraMatrix[1], distCoeffs[1],
            cv::Size( 656, 492 ), R, T, R0, R1, P0, P1, Q,
            CALIB_ZERO_DISPARITY, 1, cv::Size( 656, 492 ), &validRoi[0], &validRoi[1]);


// [Track marker..]
// cv::cvtColor, cv::inRange, cv::morphologyEx, cv::findContours, cv::fitEllipse, *calc ellipsoid centers*

// Triangulation
unsigned int N = centers[0].size();


cv::Mat pnts3D;
cv::triangulatePoints( P0, P1, centers[0], centers[1], pnts3D );


cv::Mat t = pnts3D.t();
cv::Mat pnts3DT = cv::Mat( N, 1, CV_32FC4, t.data );

cv::Mat resultPoints; 
cv::convertPointsFromHomogeneous( pnts3DT, resultPoints );

Finally, resultPoints is supposed to contain column vectors of my 3D positions in camera coordinates.
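
As a side note, cv::triangulatePoints expects the input points to be given in the coordinate frame of the projection matrices passed to it; since P0 and P1 from cv::stereoRectify describe the rectified cameras while the centers are detected in the original (distorted) images, a hedged sketch of rectifying them first with cv::undistortPoints, reusing the variables from the code above, could look like this:

std::vector<cv::Point2f> rectCenters0, rectCenters1;
cv::undistortPoints( centers[0], rectCenters0, cameraMatrix[0], distCoeffs[0], R0, P0 );
cv::undistortPoints( centers[1], rectCenters1, cameraMatrix[1], distCoeffs[1], R1, P1 );

cv::Mat pnts3D;
cv::triangulatePoints( P0, P1, rectCenters0, rectCenters1, pnts3D );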

Edit: I removed some unnecessary conversions to shorten the code.

Edit2: The results I get using the triangulation solution suggested by @marols:

Qualitative and quantitative triangulation results

P1 = { 111,  47, 526 } (bottom-right)
P2 = {  -2,   2, 577 } (left)
P3 = {  64, -46, 616 } (top)

My answer will focus on suggesting another solution to triangulatePoints. In the case of stereo vision, you can use the matrix Q returned by stereo rectification in the following way:

std::vector<cv::Vec3f> surfacePoints, realSurfacePoints;

unsigned int N = centers[0].size();
for(unsigned int i = 0; i < N; i++) {
    // since the cameras of the stereo rig lie next to each other on the OX axis,
    // disparity is measured along OX as well
    double disparity = centers[0][i].x - centers[1][i].x;

    surfacePoints.push_back(cv::Vec3f(centers[0][i].x, centers[0][i].y, disparity));
}

cv::perspectiveTransform(surfacePoints, realSurfacePoints, Q);

Please adapt this snippet to your code; I might have made some mistakes, but the point is to create an array of cv::Vec3f, each of them having the structure (point.x, point.y, disparity between the point in the two images), and to pass it to the perspectiveTransform method (see the documentation for more details). If you would like to get deeper into how the matrix Q is created (it basically represents a "reverse" projection from an image to real-world points), see "Learning OpenCV", page 435.
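
For reference, this is what cv::perspectiveTransform computes per point with the 4x4 matrix Q, written out for a single (x, y, disparity) triple (x, y and disparity are placeholders for one entry of surfacePoints; Q is assumed to be of type CV_64F, as returned by cv::stereoRectify):

cv::Mat xyd1 = (cv::Mat_<double>(4, 1) << x, y, disparity, 1.0);
cv::Mat XYZW = Q * xyd1;                    // homogeneous 3D point [X Y Z W]^T
double W = XYZW.at<double>(3);
cv::Vec3f world( XYZW.at<double>(0) / W,    // X
                 XYZW.at<double>(1) / W,    // Y
                 XYZW.at<double>(2) / W );  // Z (depth), in the same units as T (mm here)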

In the stereo vision system I developed, the described method works fine and gives proper results even with larger calibration errors (like 1.2).

To project into the real-world coordinate system, you need the projection camera matrix. This can be done as:

cv::Mat KR = CalibMatrix * R;
cv::Mat eyeC = cv::Mat::eye(3,4,CV_64F);
eyeC.at<double>(0,3) = -T.at<double>(0);
eyeC.at<double>(1,3) = -T.at<double>(1);
eyeC.at<double>(2,3) = -T.at<double>(2);

// 3x4 projection matrix: K * R * [I | -T]
CameraMatrix = KR * eyeC;
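
A hypothetical usage example, applying such a 3x4 projection matrix to a 3D point (X, Y and Z are placeholders, given in the coordinate frame the matrix was built for):

cv::Mat pt = (cv::Mat_<double>(4, 1) << X, Y, Z, 1.0);
cv::Mat proj = CameraMatrix * pt;           // homogeneous pixel coordinates
cv::Point2d pixel( proj.at<double>(0) / proj.at<double>(2),
                   proj.at<double>(1) / proj.at<double>(2) );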
