
How to generate a valid point cloud representation of a pair of stereo images using OpenCV 3.0 StereoSGBM and PCL

I have recently started working with OpenCV 3.0. My goal is to capture a pair of stereo images from a set of stereo cameras, create a proper disparity map, convert the disparity map to a 3D point cloud, and finally show the resulting point cloud in a point-cloud viewer using PCL.

I have already performed the camera calibration, and the resulting calibration RMS error is 0.4.

You can find my image pairs (Left Image) 1 and (Right Image) 2 in the links below. I am using StereoSGBM to create the disparity image, and I am using track-bars to adjust the StereoSGBM parameters in order to obtain a better disparity image. Unfortunately I can't post my disparity image, since I am new to StackOverflow and don't have enough reputation to post more than two image links!

After getting the disparity image ("disp" in the code below), I use the reprojectImageTo3D() function to convert the disparity information to XYZ 3D coordinates, and then I convert the results into an array of "pcl::PointXYZRGB" points so they can be shown in a PCL point-cloud viewer. After performing the required conversion, what I get is a silly pyramid-shaped point cloud that does not make any sense. I have already read and tried all of the suggested methods in the following links:

1- http://blog.martinperis.com/2012/01/3d-reconstruction-with-opencv-and-point.html

2- http://stackoverflow.com/questions/13463476/opencv-stereorectifyuncalibrated-to-3d-point-cloud

3- http://stackoverflow.com/questions/22418846/reprojectimageto3d-in-opencv

and none of them worked!

Below is the conversion portion of my code; it would be greatly appreciated if you could tell me what I am missing:

pcl::PointCloud<pcl::PointXYZRGB>::Ptr pointcloud(new pcl::PointCloud<pcl::PointXYZRGB>());
    Mat xyz;
    reprojectImageTo3D(disp, xyz, Q, false, CV_32F);   // xyz is CV_32FC3
    pointcloud->is_dense = false;
    pcl::PointXYZRGB point;
    for (int i = 0; i < disp.rows; ++i)
        {
            uchar* rgb_ptr = Frame_RGBRight.ptr<uchar>(i);
            // StereoSGBM::compute() returns a CV_16S disparity (scaled by 16),
            // so it must be read as short, not uchar
            short* disp_ptr = disp.ptr<short>(i);

            for (int j = 0; j < disp.cols; ++j)
            {
                short d = disp_ptr[j];
                if (d <= 0) continue;   // skip invalid disparities
                Point3f p = xyz.at<Point3f>(i, j);

                point.z = p.z;   // I have also tried p.z/16
                point.x = p.x;
                point.y = p.y;

                point.b = rgb_ptr[3 * j];
                point.g = rgb_ptr[3 * j + 1];
                point.r = rgb_ptr[3 * j + 2];
                pointcloud->points.push_back(point);
            }
        }
    // invalid disparities were skipped, so store the cloud as unorganized
    pointcloud->width = static_cast<uint32_t>(pointcloud->points.size());
    pointcloud->height = 1;
    viewer.showCloud(pointcloud);
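For anyone who wants to sanity-check their Q matrix, the reprojection that reprojectImageTo3D() performs is just a 4x4 matrix multiply followed by dehomogenization. The plain-C++ sketch below reproduces that math for a single pixel; the struct and function names are made up for this example, and note that the exact signs in OpenCV's Q from stereoRectify (e.g. -1/Tx and (cx - cx')/Tx in the last row) depend on the rectification convention.

```cpp
#include <array>

// One pixel (x, y) with disparity d, mapped through a 4x4 reprojection
// matrix Q the same way reprojectImageTo3D does internally:
//   [X Y Z W]^T = Q * [x y d 1]^T,  point = (X/W, Y/W, Z/W)
struct Point3 { double x, y, z; };

Point3 reprojectPixel(const std::array<std::array<double, 4>, 4>& Q,
                      double x, double y, double d)
{
    const double in[4] = { x, y, d, 1.0 };
    double out[4] = { 0.0, 0.0, 0.0, 0.0 };
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[r] += Q[r][c] * in[c];
    return { out[0] / out[3], out[1] / out[3], out[2] / out[3] };
}
```

With an idealized Q (focal length f, baseline Tx, principal point cx, cy), the Z that falls out is f * Tx / d, which is also a quick way to see why feeding a disparity still scaled by 16 makes every depth 16 times too small.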

After doing some work and some research I found my answer, and I am sharing it here so other readers can use it.

Nothing was wrong with the conversion algorithm from the disparity image to 3D XYZ (and eventually to a point cloud). The problem was the distance of the objects (that I was taking pictures of) to the cameras, and the amount of information available for the StereoBM or StereoSGBM algorithms to detect similarities between the two images (the image pair). To get a proper 3D point cloud you need a good disparity image, and to get a good disparity image (assuming you have performed a good calibration), make sure of the following:

1- There should be enough detectable and distinguishable common features between the two frames (right and left). The reason is that StereoBM and StereoSGBM look for common features between the two frames, and they can easily be fooled by similar-looking regions in the two frames that do not necessarily belong to the same objects. I personally think these two matching algorithms have lots of room for improvement. So beware of what you are looking at with your cameras.

2- Objects of interest (the ones whose 3D point cloud model you want) should be within a certain distance of your cameras. The bigger the baseline is (the baseline is the distance between the two cameras), the further away your objects of interest (targets) can be.
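To put numbers on that trade-off: for an ideal rectified pair, depth is Z = f * B / d, and the depth error caused by a one-pixel disparity error grows roughly as Z^2 / (f * B). The small sketch below illustrates this; the focal length and baseline values are made-up examples, not my actual rig.

```cpp
// Hedged sketch of the depth/baseline trade-off for an ideal rectified pair.
// focalPx is the focal length in pixels, baselineM the baseline in meters.

// Depth recovered from a disparity in pixels: Z = f * B / d
double depthFromDisparity(double focalPx, double baselineM, double disparityPx)
{
    return focalPx * baselineM / disparityPx;
}

// Approximate depth error for a +/-1 px disparity error: dZ ~= Z^2 / (f * B).
// It grows quadratically with depth, which is why distant targets need a
// larger baseline.
double depthErrorPerPixel(double focalPx, double baselineM, double depthM)
{
    return depthM * depthM / (focalPx * baselineM);
}
```

For example, with f = 700 px and B = 0.5 m, a target at 70 px of disparity sits at 5 m, and doubling the baseline halves the per-pixel depth error at any given depth.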

A noisy and distorted disparity image never generates a good 3D point cloud. One thing you can do to improve your disparity images is to use track-bars in your application, so you can adjust the StereoBM or StereoSGBM parameters until you see good results (a clear and smooth disparity image). The code below is a small and simple example of how to generate the track-bars (I wrote it as simply as possible). Use as required:

 int PreFilterType = 0, PreFilterCap = 0, MinDisparity = 0, UniqnessRatio = 0, TextureThreshold = 0,
    SpeckleRange = 0, SADWindowSize = 5, SpeckleWindowSize = 0, numDisparities = 0, PreFilterSize = 5;

            Ptr<StereoBM> sbm = StereoBM::create(numDisparities, SADWindowSize);

            // Create the window and track-bars once, before the loop.
            namedWindow("Track Bar Window", WINDOW_NORMAL);
            createTrackbar("Pre Filter Type", "Track Bar Window", &PreFilterType, 1);
            createTrackbar("Pre Filter Size", "Track Bar Window", &PreFilterSize, 100);
            createTrackbar("Pre Filter Cap", "Track Bar Window", &PreFilterCap, 61);
            createTrackbar("Minimum Disparity", "Track Bar Window", &MinDisparity, 200);
            createTrackbar("Uniqueness Ratio", "Track Bar Window", &UniqnessRatio, 2500);
            createTrackbar("Texture Threshold", "Track Bar Window", &TextureThreshold, 10000);
            createTrackbar("Speckle Range", "Track Bar Window", &SpeckleRange, 500);
            createTrackbar("Block Size", "Track Bar Window", &SADWindowSize, 100);
            createTrackbar("Speckle Window Size", "Track Bar Window", &SpeckleWindowSize, 200);
            createTrackbar("Number of Disparities", "Track Bar Window", &numDisparities, 500);

while (1)
{
            // Clamp the slider values to the ranges the matcher accepts.
            if (PreFilterSize % 2 == 0)
                PreFilterSize = PreFilterSize + 1;   // must be odd
            if (PreFilterSize < 5)
                PreFilterSize = 5;

            if (SADWindowSize % 2 == 0)
                SADWindowSize = SADWindowSize + 1;   // must be odd
            if (SADWindowSize < 5)
                SADWindowSize = 5;

            if (numDisparities % 16 != 0)
                numDisparities = numDisparities + (16 - numDisparities % 16);   // multiple of 16
            if (numDisparities < 16)
                numDisparities = 16;

            sbm->setPreFilterType(PreFilterType);
            sbm->setPreFilterSize(PreFilterSize);
            sbm->setPreFilterCap(PreFilterCap + 1);
            sbm->setMinDisparity(MinDisparity - 100);
            sbm->setTextureThreshold(TextureThreshold * 0.0001);
            sbm->setSpeckleRange(SpeckleRange);
            sbm->setSpeckleWindowSize(SpeckleWindowSize);
            sbm->setUniquenessRatio(0.01 * UniqnessRatio);
            sbm->setSmallerBlockSize(15);
            sbm->setDisp12MaxDiff(32);
            sbm->setNumDisparities(numDisparities);
            sbm->setBlockSize(SADWindowSize);

            // Recompute and display the disparity image here, then let the
            // GUI process events so the sliders actually update.
            if (waitKey(10) == 27) break;   // Esc to quit
}

If you are not getting proper results and a smooth disparity image, don't get disappointed. Try using the OpenCV sample images (the pair with the orange desk lamp in it) with your algorithm to make sure you have the correct pipeline, and then try taking pictures from different distances and play with the StereoBM/StereoSGBM parameters until you get something useful. I used my own face for this purpose, and since I had a very small baseline, I came very close to my cameras (here is a link to my 3D face point-cloud picture, and hey, don't you dare laugh!) 1. I was very happy to see myself in 3D point-cloud form after a week of struggling. I have never been this happy to see myself before! ;)
