
OpenCV undistortPoints and triangulatePoints give odd results (stereo)

I'm trying to get the 3D coordinates of several points in space, but I'm getting odd results from both undistortPoints() and triangulatePoints().

Since the two cameras have different resolutions, I calibrated them separately and got RMS errors of 0.34 and 0.43, then used stereoCalibrate() to get the inter-camera matrices (with an RMS of 0.708), and then used stereoRectify() to get the remaining matrices. With those in hand I started working on the gathered coordinates, but I get weird results.

For example, for an input of (935, 262) the undistortPoints() output is (1228.709125, 342.79841), while for another point the input (934, 176) maps to (1227.9016, 292.4686). This is weird, because both of these points are very close to the middle of the frame, where distortion is smallest; I didn't expect them to be moved by almost 300 pixels.

When passed to triangulatePoints(), the results get even stranger. I measured the distance between three points in real life (with a ruler), and also calculated the distance between the corresponding pixels in each picture. Because this time the points lay on a fairly flat plane, the two ratios matched: |AB|/|BC| was around 4/9 in both cases. However, triangulatePoints() gives results that are way off, with |AB|/|BC| being 3/2 or 4/2.

This is my code:

double pointsBok[2] = { bokList[j].toFloat() + xBok/2, bokList[j+1].toFloat() + yBok/2 };
cv::Mat imgPointsBokProper = cv::Mat(1, 1, CV_64FC2, pointsBok);

double pointsTyl[2] = { tylList[j].toFloat() + xTyl/2, tylList[j+1].toFloat() + yTyl/2 };
cv::Mat imgPointsTylProper = cv::Mat(1, 1, CV_64FC2, pointsTyl);

// Undistort and rectify the measured pixel positions of both cameras
cv::undistortPoints(imgPointsBokProper, imgPointsBokProper,
      intrinsicOne, distCoeffsOne, R1, P1);
cv::undistortPoints(imgPointsTylProper, imgPointsTylProper,
      intrinsicTwo, distCoeffsTwo, R2, P2);

// Triangulate the rectified points into a homogeneous 4x1 point
cv::Mat point4D;
cv::triangulatePoints(P1, P2, imgPointsBokProper, imgPointsTylProper, point4D);

// Divide by w to get Euclidean coordinates
double wResult = point4D.at<double>(3, 0);
double realX = point4D.at<double>(0, 0) / wResult;
double realY = point4D.at<double>(1, 0) / wResult;
double realZ = point4D.at<double>(2, 0) / wResult;

The angles between the points are sometimes roughly right, but usually not:

`7.16816    168.389 4.44275` vs `5.85232    170.422 3.72561` (degrees)
`8.44743    166.835 4.71715` vs `12.4064    158.132 9.46158`
`9.34182    165.388 5.26994` vs `19.0785    150.883 10.0389`

I've also tried using undistort() on the entire frame, but the results were just as odd. The distance between points B and C should stay pretty much constant over time, and yet this is what I get:

7502.42
4876.46
3230.13
2740.67
2239.95

Frame by frame.

Pixel distance (bottom) vs real distance (top), which should be very similar: [image: |BC| distance]

Angle:

[image: ABC angle]

Also, shouldn't undistortPoints() and undistort() give the same results (another set of videos here)?

The function cv::undistortPoints() does undistortion and reprojection in one go. It performs the following list of operations:

  1. undo the camera projection (multiplication with the inverse of the camera matrix)
  2. apply the distortion model to undo the distortion
  3. rotate by the provided rotation matrix R1/R2
  4. project the points back to an image using the provided projection matrix P1/P2

If you pass the matrices R1, P1 resp. R2, P2 from cv::stereoRectify(), the input points will be undistorted and rectified. Rectification means that the images are transformed in such a way that corresponding points have the same y-coordinate. There is no unique solution for image rectification: you can apply any translation or scaling to both images without changing the alignment of corresponding points. That being said, cv::stereoRectify() can shift the center of projection quite a bit (e.g. by 300 pixels). If you want pure undistortion, you can pass an identity matrix (instead of R1) and the original camera matrix K (instead of P1). This should lead to pixel coordinates similar to the original ones.
