
missing region in disparity map

I am currently working on stereo processing using OpenCV 2.3 and a Point Grey Bumblebee2 stereo camera as an input device. Image acquisition is done via libdc1394.

My code for rectification and stereo processing is the following:

void StereoProcessing::calculateDisparityMap(const Mat &left, const Mat &right, Mat &disparity_map)
{

  Mat map11, map12, map21, map22, left_rectified, right_rectified, disp16;

  // Computes the undistortion and rectification transformation maps
  initUndistortRectifyMap(this->camera_matrix1,
        this->distance_coefficients1,
        this->R1,
        this->P1,
        this->output_image_size,
        CV_16SC2,
        map11,
        map12);
  initUndistortRectifyMap(this->camera_matrix2,
        this->distance_coefficients2,
        this->R2,
        this->P2,
        this->output_image_size,
        CV_16SC2,
        map21,
        map22);

  // creates rectified images
  remap(left, left_rectified, map11, map12, INTER_LINEAR);
  remap(right, right_rectified, map21, map22, INTER_LINEAR);

  // calculates the 16-bit fixed-point disparity map
  this->stereo_bm(left_rectified, right_rectified, disp16);

  disp16.convertTo(disparity_map, CV_8U, 255 / (this->stereo_bm.state->numberOfDisparities * 16.0));
}

This works fine except for a black border on the left side of the disparity map, which looks like this:

[disparity map with a black border on the left]

The input images are these two - unrectified, as you can see ;) : [left unrectified image] [right unrectified image]

So my question is now: Is this normal behaviour, or do you see any mistake I have made so far? As additional information: the rectification itself works fine.

The width of the missing region is equal to the number of disparities used in stereo_bm. It is a normal by-product of the stereo_bm algorithm.

I think this happens because the algorithm computes the disparity by matching blocks around pixels in the left image to blocks around pixels in the same row of the right image (assuming the images are rectified). Since there is a region with no overlap between the views of the left and right cameras, the algorithm can't find a match for the blocks around pixels within this region. The width of the "missing" region equals the "number of disparities" parameter because the algorithm gives up trying to match a given block after "number of disparities" attempts (along the same horizontal row as the pixel in the left image). I'm sorry if I wasn't clear enough. If you wish to get more details about how it works, there is some code at http://siddhantahuja.wordpress.com/2010/04/11/correlation-based-similarity-measures-summary/ .
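This can be illustrated with a toy 1-D block matcher (a hypothetical sketch, not OpenCV code; `matchRow`, `numDisp` and `halfWin` are made-up names): for each left-image pixel x, the matcher searches the right row at positions x - d for d in [0, numDisp). Pixels whose full search range would leave the right image get no estimate, just like the leftmost columns in stereo_bm's output.

```cpp
#include <cassert>
#include <climits>
#include <cstdlib>
#include <vector>

// Sum-of-absolute-differences block matching along one rectified scanline
// pair. Returns the winning disparity per pixel, or -1 where the full search
// range [x - numDisp + 1, x] (plus the window border) would fall outside the
// right row - mimicking how the leftmost columns stay black in stereo_bm.
std::vector<int> matchRow(const std::vector<int>& leftRow,
                          const std::vector<int>& rightRow,
                          int numDisp, int halfWin)
{
    const int w = static_cast<int>(leftRow.size());
    std::vector<int> disp(w, -1);
    for (int x = halfWin; x < w - halfWin; ++x) {
        if (x - (numDisp - 1) - halfWin < 0)
            continue;  // search range not fully inside the right row
        int bestCost = INT_MAX, bestD = -1;
        for (int d = 0; d < numDisp; ++d) {
            int cost = 0;
            for (int k = -halfWin; k <= halfWin; ++k)
                cost += std::abs(leftRow[x + k] - rightRow[x - d + k]);
            if (cost < bestCost) { bestCost = cost; bestD = d; }
        }
        disp[x] = bestD;
    }
    return disp;
}
```

If you synthesize a right row with unique texture and build the left row as a copy shifted by 5 pixels, the matcher recovers disparity 5 everywhere except a band of roughly numDisp columns at the left edge (and the half-window border), which stays invalid. In practice you would either crop that band from the disparity map or accept it as expected.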

