
Disparity map from foreground masked images

I am trying to extract the disparity map of foreground objects in an image. The foreground objects are extracted using color, and the final purpose is to determine the coordinates of the extracted objects. Below is the masked left image of the view, with the reddish objects extracted:

[image: masked left view]

and then there is the right image:

[image: right view]

The background is basically a giant window that I want ignored; I only care about finding the position of the reddish (or any other color I later choose) objects.
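For reference, here is a minimal sketch of how such a color-based foreground mask could be produced with OpenCV; the HSV thresholds and file names are only placeholders and would need tuning for the actual reddish objects.

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>

int main()
{
    // Load the rectified left view (file name is a placeholder).
    cv::Mat left = cv::imread("left_rectified.png");

    // Threshold in HSV, where a reddish hue is easier to isolate.
    cv::Mat hsv;
    cv::cvtColor(left, hsv, cv::COLOR_BGR2HSV);

    // Red wraps around the hue axis, so two ranges are combined.
    // The bounds below are illustrative only.
    cv::Mat maskLow, maskHigh, mask;
    cv::inRange(hsv, cv::Scalar(0, 80, 60),   cv::Scalar(10, 255, 255),  maskLow);
    cv::inRange(hsv, cv::Scalar(170, 80, 60), cv::Scalar(180, 255, 255), maskHigh);
    cv::bitwise_or(maskLow, maskHigh, mask);

    // Keep only the masked (reddish) pixels; everything else becomes black.
    cv::Mat masked;
    left.copyTo(masked, mask);
    cv::imwrite("left_masked.png", masked);
    return 0;
}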

After playing around with the parameters of the SGBM algorithm in the OpenCV example, mainly

int SADWindowSize
int minDisparity
int numberOfDisparities

I was not able to get satisfying results; more precisely, the algorithm was not able to deal very well with the uniform texture of the masked parts. I will post two examples to illustrate. The SADWindowSize is the only parameter varied in those examples because it gives the most distinctive results.

Example 1: with smaller window size = 9 and number of disparities = 64

[image: disparity map, window size 9]

Example 2: with bigger window size = 23 and number of disparities = 64

[image: disparity map, window size 23]

The bigger window size gives more smeared results that are undesirable.
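For completeness, a minimal sketch of such an SGBM setup, written against the current cv::StereoSGBM::create API (where SADWindowSize corresponds to blockSize and numberOfDisparities to numDisparities); the file names are placeholders and the settings roughly match Example 1.

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/imgcodecs.hpp>

int main()
{
    // Rectified (and here, pre-masked) stereo pair; file names are placeholders.
    cv::Mat left  = cv::imread("left_masked.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right_masked.png", cv::IMREAD_GRAYSCALE);

    // Roughly the settings of Example 1: window size 9, 64 disparities.
    int minDisparity = 0, numDisparities = 64, blockSize = 9;
    cv::Ptr<cv::StereoSGBM> sgbm =
        cv::StereoSGBM::create(minDisparity, numDisparities, blockSize);

    // Smoothness penalties are commonly scaled with the block size.
    sgbm->setP1(8 * blockSize * blockSize);
    sgbm->setP2(32 * blockSize * blockSize);

    // SGBM returns fixed-point disparities scaled by 16.
    cv::Mat disp16, disp;
    sgbm->compute(left, right, disp16);
    disp16.convertTo(disp, CV_32F, 1.0 / 16.0);

    cv::imwrite("disparity_raw.png", disp16);
    return 0;
}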

The question: is it a wrong approach to mask the background when calculating the disparity map? Another possible approach is to calculate the disparity map and then apply the mask, but I am not sure about the plausibility of the results in that case.

Note that the cameras are calibrated and the images (and masks) are rectified.

Masking before calculating the depth map doesn't make sense, because the algorithm needs to compare a given neighborhood to find corresponding pixels. Using the mask causes a loss of information due to all the black pixels. So what you are trying is intuitive for us, but the application can't easily determine which pixels represent the same point.

I'm not sure, but if you apply the left-view mask to the disparity map you should get what you're expecting. Or mask the output of reprojectImageTo3D().
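A rough sketch of that suggestion, assuming the disparity map was computed from the full (unmasked) pair as CV_32F, the left-view foreground mask is an 8-bit image, and Q is the 4x4 reprojection matrix from stereoRectify; the helper name foregroundCentroid is made up for illustration.

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

// disp: CV_32F disparity map computed from the full (unmasked) rectified pair
// mask: 8-bit left-view foreground mask (non-zero on the reddish objects)
// Q:    4x4 reprojection matrix from stereoRectify
// Returns the mean 3D position of the masked foreground pixels.
cv::Vec3f foregroundCentroid(const cv::Mat& disp, const cv::Mat& mask, const cv::Mat& Q)
{
    // Option 1: a disparity map restricted to the foreground (shown only for illustration).
    cv::Mat dispMasked = cv::Mat::zeros(disp.size(), disp.type());
    disp.copyTo(dispMasked, mask);

    // Option 2: reproject every pixel to 3D, then read points only where the mask is set.
    cv::Mat xyz;
    cv::reprojectImageTo3D(disp, xyz, Q);

    double sx = 0, sy = 0, sz = 0;
    int count = 0;
    for (int y = 0; y < mask.rows; ++y)
        for (int x = 0; x < mask.cols; ++x)
            if (mask.at<uchar>(y, x) && disp.at<float>(y, x) > 0.0f)
            {
                cv::Vec3f p = xyz.at<cv::Vec3f>(y, x);  // (X, Y, Z) of a foreground pixel
                sx += p[0]; sy += p[1]; sz += p[2];
                ++count;
            }
    if (count == 0)
        return cv::Vec3f(0.0f, 0.0f, 0.0f);
    return cv::Vec3f(float(sx / count), float(sy / count), float(sz / count));
}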
