
OpenCV depth estimation from a disparity map

I'm trying to estimate depth from a stereo pair of images with OpenCV. I have a disparity map, and the depth can be estimated as:

             (Baseline*focal)
depth  =     ------------------
           (disparity*SensorSize)
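
If the focal length is given in millimetres and SensorSize is the pixel pitch (millimetres per pixel), the ratio focal/SensorSize is just the focal length expressed in pixels. A minimal sketch of the computation (the function name and the sample values are assumptions for illustration):

#include <stdio.h>

/* Depth from disparity for a rectified stereo pair:
   baseline in metres, focal length in mm, pixel pitch (SensorSize)
   in mm/pixel, disparity in pixels. Returns depth in metres. */
double depth_from_disparity(double baseline_m, double focal_mm,
                            double pixel_pitch_mm, double disparity_px)
{
    if (disparity_px <= 0.0)
        return -1.0;                              /* no valid match */
    double focal_px = focal_mm / pixel_pitch_mm;  /* focal length in pixels */
    return (baseline_m * focal_px) / disparity_px;
}

int main(void)
{
    /* Hypothetical values: 12 cm baseline, 6 mm lens, 6 um pixels,
       measured disparity of 40 px -> 3.000 m. */
    printf("depth = %.3f m\n", depth_from_disparity(0.12, 6.0, 0.006, 40.0));
    return 0;
}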

I used the Block Matching technique to find corresponding points in the two rectified images. OpenCV lets you set several block matching parameters, for example BMState->numberOfDisparities (see the sketch below).
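
This is roughly how the legacy C API state is created and tuned (the preset and the parameter values below are illustrative assumptions, not taken from the question; numberOfDisparities must be a positive multiple of 16):

CvStereoBMState *BMState = cvCreateStereoBMState(CV_STEREO_BM_BASIC, 0);
BMState->SADWindowSize       = 15;   /* odd matching block size       */
BMState->minDisparity        = 0;    /* disparity search starts here  */
BMState->numberOfDisparities = 64;   /* search range, multiple of 16  */
BMState->uniquenessRatio     = 10;   /* reject ambiguous matches      */
/* ... run the matching, then cvReleaseStereoBMState(&BMState); */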

After the block matching process:

cvFindStereoCorrespondenceBM( frame1r, frame2r, disp, BMState ); /* block matching on the rectified pair */
cvConvertScale( disp, disp, 16, 0 );                             /* multiply the raw result by 16 */
cvNormalize( disp, vdisp, 0, 255, CV_MINMAX );                   /* stretch to 0..255 for display */

I then read the depth value as:

if (cvGet2D(vdisp, y, x).val[0] > 0)
{
    /* Note: vdisp holds the values normalized to 0..255 for display. */
    depth = (baseline * focal) / (cvGet2D(vdisp, y, x).val[0] * SENSOR_ELEMENT_SIZE);
}

But the depth value obtained this way differs from the value given by the formula above, because the value of BMState->numberOfDisparities changes the result.

How can I set this parameter? What should I change this parameter to?

Thanks

The simple formula is valid if and only if the motion from the left camera to the right one is a pure translation (in particular, parallel to the horizontal image axis).

In practice this is hardly ever the case. It is common, for example, to perform the matching after rectifying the images, i.e. after warping them using a known fundamental matrix, so that corresponding pixels are constrained to belong to the same row. Once you have matches on the rectified images, you can remap them onto the original images using the inverse of the rectifying warp, and then triangulate into 3D space to reconstruct the scene. OpenCV has a routine to do that: reprojectImageTo3D (see the sketch below).
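
A minimal sketch of that last step with the legacy C API, assuming you already have the 4x4 reprojection matrix Q produced by cvStereoRectify (the variable names are illustrative):

CvSize sz = cvGetSize(disp);

/* The BM result stores disparities in fixed point (scaled by 16),
   so convert to real-valued disparities first. */
CvMat *dispF = cvCreateMat(sz.height, sz.width, CV_32F);
cvConvertScale(disp, dispF, 1.0 / 16.0, 0);

/* Every pixel becomes an (X, Y, Z) point in the rectified camera frame. */
CvMat *image3D = cvCreateMat(sz.height, sz.width, CV_32FC3);
cvReprojectImageTo3D(dispF, image3D, Q, 0);

/* The metric depth of the pixel at (x, y) is the Z component: */
double Z = cvGet2D(image3D, y, x).val[2];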

The formula you mentioned above won't work when the camera plane and the image plane are not the same, i.e. the camera is mounted at some height and the plane it captures is on the ground. So you have to modify this formula a little. One way is to fit the measured disparity values against known distances with a polynomial by curve fitting; from it you get coefficients that can be used for other, unknown distances (a sketch follows below). A second way is to create a 3D point cloud using the reprojection ("warp") matrix Q and reprojectImageTo3D (OpenCV API).
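
A minimal sketch of the curve-fitting approach, under the assumption that depth is roughly proportional to 1/disparity, so a straight-line fit of depth against 1/d is enough (the calibration samples are hypothetical):

#include <stdio.h>

/* Least-squares fit of depth = a * (1/disparity) + b from calibration
   samples taken at known distances. The coefficients a and b can then
   be used to estimate unknown distances. */
void fit_depth_model(const double *disp, const double *depth, int n,
                     double *a, double *b)
{
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        double x = 1.0 / disp[i];              /* the model is linear in 1/d */
        sx  += x;      sy  += depth[i];
        sxx += x * x;  sxy += x * depth[i];
    }
    *a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    *b = (sy - *a * sx) / n;
}

int main(void)
{
    /* Hypothetical calibration: disparities measured at known distances. */
    double d[] = { 80.0, 40.0, 20.0, 16.0 };     /* pixels */
    double z[] = {  1.0,  2.0,  4.0,  5.0 };     /* metres */
    double a, b;
    fit_depth_model(d, z, 4, &a, &b);
    printf("depth(d) = %.3f / d + %.3f\n", a, b);
    printf("estimated depth at d = 32 px: %.3f m\n", a / 32.0 + b);
    return 0;
}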
