
How to correctly use cv::triangulatePoints()

I am trying to triangulate some points with OpenCV, and I found the cv::triangulatePoints() function. The problem is that there is almost no documentation or example code for it.

I have some doubts about it.

  1. What method does it use? I've done a bit of research on triangulation, and there are several methods (linear, linear LS, eigen, iterative LS, iterative eigen, ...), but I can't find which one OpenCV uses.

  2. How should I use it? It seems that as input it needs a projection matrix and 3xN homogeneous 2D points. I have them defined as std::vector<cv::Point3d> pnts, but as output it needs 4xN arrays, and obviously I can't create a std::vector<cv::Point4d> because it doesn't exist, so how should I define the output vector?

For the second question I tried cv::Mat pnts3D(4,N,CV_64F); and cv::Mat pnts3d;, but neither seems to work (both throw an exception).

1.- The method used is least squares. There are more complex algorithms than this one, but it is still the most common one, as the other methods may fail in some cases (i.e., some of them fail if the points are on a plane or at infinity).

The method can be found in Multiple View Geometry in Computer Vision by Richard Hartley and Andrew Zisserman (p. 312).
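In outline (a sketch of that linear method, not necessarily OpenCV's exact internals): each observed image point x = (x, y, 1) with x ~ PX contributes two independent linear equations, (x*p3^T - p1^T)X = 0 and (y*p3^T - p2^T)X = 0, where pi^T denotes the i-th row of the 3x4 projection matrix P. Stacking the four equations from the two views gives a system AX = 0, which is solved for the homogeneous point X via SVD.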

2.- The usage:

cv::Mat pnts3D(1,N,CV_64FC4);
cv::Mat cam0pnts(1,N,CV_64FC2);
cv::Mat cam1pnts(1,N,CV_64FC2);

Fill the two-channel point matrices with the points from the images.

cam0 and cam1 are 3x4 camera projection matrices (intrinsic and extrinsic parameters). You can construct them by multiplying A*RT, where A is the intrinsic parameter matrix and RT is the 3x4 rotation-translation pose matrix.
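For example, here is a minimal sketch of building such a matrix; the intrinsics and pose below are made-up placeholders, so substitute your own calibration results:

#include <opencv2/core.hpp>

// Made-up intrinsics and pose, for illustration only.
cv::Mat A = (cv::Mat_<double>(3,3) << 800,   0, 320,
                                        0, 800, 240,
                                        0,   0,   1);
cv::Mat R = cv::Mat::eye(3, 3, CV_64F);               // rotation
cv::Mat t = (cv::Mat_<double>(3,1) << 0.1, 0.0, 0.0); // translation

cv::Mat RT(3, 4, CV_64F);               // [R|t] pose matrix
R.copyTo(RT(cv::Rect(0, 0, 3, 3)));     // left 3x3 block = R
t.copyTo(RT(cv::Rect(3, 0, 1, 3)));     // last column = t
cv::Mat cam0 = A * RT;                  // 3x4 projection matrix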

cv::triangulatePoints(cam0,cam1,cam0pnts,cam1pnts,pnts3D);

NOTE: pnts3D NEEDS to be a 4-channel 1xN cv::Mat when defined, and it throws an exception if not, but the result is a cv::Mat(4,N,CV_64FC1) matrix. Really confusing, but it is the only way I didn't get an exception.


UPDATE: As of version 3.0 or possibly earlier, this is no longer true, and pnts3D can also be of type Mat(4,N,CV_64FC1) or may be left completely empty (as usual, it is created inside the function).
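Either way, the output is in homogeneous coordinates. A small sketch (assuming the 4xN, CV_64FC1 layout described above) of recovering Euclidean 3D points by dividing by the fourth coordinate:

// Recover Euclidean 3D points from the homogeneous 4xN result.
std::vector<cv::Point3d> pts;
for (int i = 0; i < pnts3D.cols; ++i) {
    double w = pnts3D.at<double>(3, i);            // homogeneous scale
    pts.emplace_back(pnts3D.at<double>(0, i) / w,
                     pnts3D.at<double>(1, i) / w,
                     pnts3D.at<double>(2, i) / w);
}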

A small addition to @Ander Biguri's answer. You should get your image points on a non-undistorted image and invoke undistortPoints() on cam0pnts and cam1pnts, because cv::triangulatePoints expects the 2D points in normalized coordinates (independent of the camera); cam0 and cam1 should then be [R|t]-only matrices, and you do not need to multiply them by A.
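A minimal sketch of that workflow; K0, K1, dist0 and dist1 stand in for your calibrated intrinsics and distortion coefficients, and P0, P1 for the extrinsics-only projection matrices:

std::vector<cv::Point2d> cam0norm, cam1norm;
// Pixel coordinates -> normalized image coordinates (undistorted).
cv::undistortPoints(cam0pnts, cam0norm, K0, dist0);
cv::undistortPoints(cam1pnts, cam1norm, K1, dist1);
// Triangulate with extrinsics-only matrices, e.g. P0 = [I|0], P1 = [R|t].
cv::triangulatePoints(P0, P1, cam0norm, cam1norm, pnts3D);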

Thanks to Ander Biguri! His answer helped me a lot. But I always prefer the alternative with std::vector, so I edited his solution to this:

std::vector<cv::Point2d> cam0pnts;
std::vector<cv::Point2d> cam1pnts;
// You fill them, both with the same size...

// You can pick any of the following 2 (your choice)
// cv::Mat pnts3D(1,cam0pnts.size(),CV_64FC4);
cv::Mat pnts3D(4,cam0pnts.size(),CV_64F);

cv::triangulatePoints(cam0,cam1,cam0pnts,cam1pnts,pnts3D);

So you just need to emplace_back the points. Main advantage: you do not need to know the size N before you start filling them. Unfortunately, there is no cv::Point4f, so pnts3D must be a cv::Mat...

I tried cv::triangulatePoints, but somehow it calculates garbage. I was forced to implement a linear triangulation method manually, which returns a 4x1 matrix for the triangulated 3D point:

#include <opencv2/core.hpp>
using namespace cv;

// Linear least-squares (DLT-style) triangulation.
// mat_P_l, mat_P_r: the 3x4 projection matrices of the left/right cameras.
// warped_back_l, warped_back_r: homogeneous image points (x, y, w) as 3x1 Mats.
// Returns the triangulated point as a 4x1 homogeneous matrix (X, Y, Z, 1).
Mat triangulate_Linear_LS(Mat mat_P_l, Mat mat_P_r, Mat warped_back_l, Mat warped_back_r)
{
    Mat A(4,3,CV_64FC1), b(4,1,CV_64FC1), X(3,1,CV_64FC1), X_homogeneous(4,1,CV_64FC1), W(1,1,CV_64FC1);
    W.at<double>(0,0) = 1.0;
    // Rows 0-1 of A and b come from the left view, rows 2-3 from the right view.
    A.at<double>(0,0) = (warped_back_l.at<double>(0,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,0) - mat_P_l.at<double>(0,0);
    A.at<double>(0,1) = (warped_back_l.at<double>(0,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,1) - mat_P_l.at<double>(0,1);
    A.at<double>(0,2) = (warped_back_l.at<double>(0,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,2) - mat_P_l.at<double>(0,2);
    A.at<double>(1,0) = (warped_back_l.at<double>(1,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,0) - mat_P_l.at<double>(1,0);
    A.at<double>(1,1) = (warped_back_l.at<double>(1,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,1) - mat_P_l.at<double>(1,1);
    A.at<double>(1,2) = (warped_back_l.at<double>(1,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,2) - mat_P_l.at<double>(1,2);
    A.at<double>(2,0) = (warped_back_r.at<double>(0,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,0) - mat_P_r.at<double>(0,0);
    A.at<double>(2,1) = (warped_back_r.at<double>(0,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,1) - mat_P_r.at<double>(0,1);
    A.at<double>(2,2) = (warped_back_r.at<double>(0,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,2) - mat_P_r.at<double>(0,2);
    A.at<double>(3,0) = (warped_back_r.at<double>(1,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,0) - mat_P_r.at<double>(1,0);
    A.at<double>(3,1) = (warped_back_r.at<double>(1,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,1) - mat_P_r.at<double>(1,1);
    A.at<double>(3,2) = (warped_back_r.at<double>(1,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,2) - mat_P_r.at<double>(1,2);
    b.at<double>(0,0) = -((warped_back_l.at<double>(0,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,3) - mat_P_l.at<double>(0,3));
    b.at<double>(1,0) = -((warped_back_l.at<double>(1,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,3) - mat_P_l.at<double>(1,3));
    b.at<double>(2,0) = -((warped_back_r.at<double>(0,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,3) - mat_P_r.at<double>(0,3));
    b.at<double>(3,0) = -((warped_back_r.at<double>(1,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,3) - mat_P_r.at<double>(1,3));
    // Solve the overdetermined system A*X = b in the least-squares sense.
    solve(A,b,X,DECOMP_SVD);
    // Append w = 1 to return a homogeneous 4x1 point.
    vconcat(X,W,X_homogeneous);
    return X_homogeneous;
}

The input parameters are the two 3x4 camera projection matrices and a corresponding left/right pixel pair (x, y, w).
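A hypothetical usage sketch (P_l, P_r and the pixel values are placeholders):

cv::Mat xl = (cv::Mat_<double>(3,1) << 421.0, 310.0, 1.0); // left pixel (x, y, w)
cv::Mat xr = (cv::Mat_<double>(3,1) << 405.0, 310.0, 1.0); // right pixel (x, y, w)
cv::Mat Xh = triangulate_Linear_LS(P_l, P_r, xl, xr);      // 4x1 (X, Y, Z, 1)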

Alternatively, you could use the Hartley & Zisserman method implemented here: http://www.morethantechnical.com/2012/01/04/simple-triangulation-with-opencv-from-harley-zisserman-w-code/

Additionally to Ginés Hidalgo's comments:

If you did a stereo calibration and could accurately estimate the fundamental matrix from it (calculated based on a checkerboard), use the correctMatches function to refine the detected keypoints:

std::vector<cv::Point2f> pt_set1_pt_c, pt_set2_pt_c;
cv::correctMatches(F, pt_set1_pt, pt_set2_pt, pt_set1_pt_c, pt_set2_pt_c);
