
Struggling with steps for 3D reconstruction (Matlab)

We've been asked to do 3D reconstruction (a master's module taken during my PhD), and I'm pulling my hair out. I'm not sure whether I'm missing steps or doing them wrong. I've tried googling for code and replacing its functions with mine, just to see if I can get correct results that way, but I can't.

I'll just go through the steps of what I'm doing so far, and I hope one of you can tell me I'm missing something obvious:

Images I'm using: http://imgur.com/a/UbshI

  • Load calibration left and right images, click on corresponding points to get P1 and P2

  • Use RQ decomposition to get K1 and K2 (and R1, R2, t1, t2, though I don't seem to use them anywhere. Originally I tried R = R1*R2', t = t2 - t1 to create my new P2 after setting P1 to the canonical (I | 0), but that didn't work either).

  • Set P1 to be canonical (I | 0)

  • Calculate fundamental matrix F, and corresponding points im1, im2 using RANSAC.

  • Get colour of pixels at the points

  • Get essential matrix E by doing K2' * F * K1

  • Get the 4 different projection matrices from E, and then select right one

  • Triangulate matches using P1, P2, im1, im2 to get 3D points

  • Use scatter plot to plot 3D points, giving them the RGB value of the pixel at that point.

  • My unsatisfactory result:

    http://imgur.com/OZXXBEC

At the moment, since I'm not getting ANYWHERE, I'd like to start with the simplest option and work my way up. FYI, I'm using MATLAB. If anyone's got any tips at all, I'd really love to hear them.
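For reference, the F-estimation and essential-matrix steps above can be sketched with Computer Vision Toolbox calls (a rough, untested sketch; `im1`/`im2` are the Nx2 matched pixel coordinates and `K1`/`K2` the intrinsics from the calibration step, as named in the list):

```matlab
% Estimate F with RANSAC and keep only the inlier correspondences.
[F, inliers] = estimateFundamentalMatrix(im1, im2, ...
    'Method', 'RANSAC', 'NumTrials', 2000, 'DistanceThreshold', 0.01);
im1 = im1(inliers, :);
im2 = im2(inliers, :);

E = K2' * F * K1;   % essential matrix from the intrinsics
% ... then decompose E into the four (R, t) candidates, pick the one
% giving positive depths, build P2 = K2 * [R t], and triangulate ...
```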

Turns out there was a weird reason it wasn't working. I was using MATLAB's detectSURFFeatures , which was giving inaccurate matching pairs. I never suspected it was wrong, but one of my coursemates had the same issue. I changed it to detectMinEigenFeatures instead and it works fine. Here's my result now; it's not perfect, but it's much, much better:
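For anyone hitting the same issue, the swap looks roughly like this (a sketch; the image filenames are placeholders):

```matlab
I1 = rgb2gray(imread('left.png'));     % placeholder filenames
I2 = rgb2gray(imread('right.png'));

% Min-eigenvalue (Shi-Tomasi) corners instead of SURF
pts1 = detectMinEigenFeatures(I1);
pts2 = detectMinEigenFeatures(I2);

[f1, vpts1] = extractFeatures(I1, pts1);
[f2, vpts2] = extractFeatures(I2, pts2);

pairs    = matchFeatures(f1, f2);
matched1 = vpts1(pairs(:, 1));
matched2 = vpts2(pairs(:, 2));

% Eyeball the matches before feeding them to anything downstream
figure; showMatchedFeatures(I1, I2, matched1, matched2);
```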


If you already have P1 and P2, then you can simply triangulate matching pairs of points from the two images. There is no need to estimate the fundamental matrix.
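In MATLAB that is a single call (a sketch, assuming `P1`/`P2` are textbook 3x4 projection matrices and `im1`/`im2` are Nx2 matched points):

```matlab
% triangulate() follows MATLAB's row-vector convention, so a textbook
% 3x4 projection matrix must be passed transposed (4x3).
worldPoints = triangulate(im1, im2, P1', P2');
```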

If you only have the intrinsics (K for a single camera, or K1 and K2 for two different cameras), then your approach is valid:

  1. Estimate fundamental matrix
  2. Get essential matrix
  3. Decompose E into R and t
  4. Set P1 to canonical, and compute P2 from K, R, and t.
  5. Triangulate matching points using P1 and P2.

This approach is illustrated in an example in the Computer Vision System Toolbox.
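Step 3 can be sketched as follows (the standard SVD decomposition of E, Hartley & Zisserman style, in column-vector convention; transpose as needed for the toolbox functions):

```matlab
% Decompose E into its four candidate relative poses.
[U, ~, V] = svd(E);
W = [0 -1 0; 1 0 0; 0 0 1];

Ra = U * W  * V';  if det(Ra) < 0, Ra = -Ra; end   % force proper rotations
Rb = U * W' * V';  if det(Rb) < 0, Rb = -Rb; end
t  = U(:, 3);

% Four candidates: (Ra, t), (Ra, -t), (Rb, t), (Rb, -t). Triangulate a
% few matches with each and keep the pose that puts the points in front
% of both cameras (positive depth); then P2 = K2 * [R t].
```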

In either case, you should check your code carefully, and make sure all the matrices make sense. MATLAB's convention is to multiply a row vector by a matrix, while many textbooks multiply a matrix by a column vector. So matrices may need to be transposed.
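Concretely, the shapes work out like this (hedged; the exact convention depends on your toolbox version):

```matlab
% Textbook, column vectors:   x = K * [R t] * X,   P is 3x4.
% MATLAB toolbox, row vectors: x = X * camMatrix,  camMatrix is 4x3,
% which works out to the transpose of the textbook P.
P_textbook = K * [R, t];     % 3x4, column-vector convention
camMatrix  = P_textbook';    % 4x3, the shape triangulate() expects
```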

And before that, plot your point matches using showMatchedFeatures to make sure those make sense.
