
OpenCV camera pose estimation

I am currently working on a project that requires me to find the pose of a camera using the OpenCV library. I am working on an iPod, and currently I take video input, find keypoints and descriptors using ORB, and match points between two frames using BruteForceMatcher, in quasi real time (it's highly unoptimised as of right now). I'm not sure if this is necessary, but I also filter the matches so that only matches that map both ways are drawn, i.e. k → k1 and k1 → k.
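The two-way filtering described above (often called a symmetric or cross-check test) can be sketched independently of OpenCV. In this toy version, representing matches as `(query, train)` index pairs is a simplification of OpenCV's `DMatch`, and the function name is just illustrative; a match is kept only when matching in the reverse direction returns the same pair:

```python
def cross_check(matches_12, matches_21):
    """Keep only matches that agree in both directions.

    matches_12: list of (i, j) pairs, keypoint i in frame 1 -> keypoint j in frame 2
    matches_21: list of (j, i) pairs, keypoint j in frame 2 -> keypoint i in frame 1
    """
    reverse = {j: i for j, i in matches_21}
    return [(i, j) for i, j in matches_12 if reverse.get(j) == i]

# Only (0, 2) and (3, 1) survive the symmetric test; (1, 5) does not map back.
fwd = [(0, 2), (1, 5), (3, 1)]
bwd = [(2, 0), (5, 4), (1, 3)]
print(cross_check(fwd, bwd))  # [(0, 2), (3, 1)]
```

OpenCV's `BFMatcher` offers the same behaviour directly via its `crossCheck=True` constructor flag, so you do not have to run the matcher twice yourself.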

I have the intrinsic parameters of the camera I am using, as well as the 2D keypoints. From these I am hoping to find the pose of the camera (I assume this means the extrinsic parameters of rotation and translation).

Although I have looked through numerous tutorials, a lot of this has gone a bit over my head, and I need some guidance as to which method would work, as well as an explanation. Most of the tutorials use a square of set reference points; however, I have no markers to use other than the keypoints pulled out of the frames.

From what I understand the steps are:

a) Find corresponding keypoints

b) Identify fundamental matrix

c) Estimate essential matrix

d) Decompose essential matrix into rotation and translation vectors

However, beyond step a) I am stuck.
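For steps b)–d), here is a minimal numpy sketch of the underlying linear algebra, under the simplifying assumption that the pixel coordinates have already been normalised by multiplying with inv(K) — in which case the eight-point algorithm estimates the essential matrix directly, merging steps b) and c). It is meant to illustrate the maths only; for real, noisy matches you would use OpenCV's `findFundamentalMat`/`findEssentialMat` with RANSAC and `recoverPose` instead:

```python
import numpy as np

def eight_point(x1, x2):
    """Linear eight-point estimate of the essential matrix.

    x1, x2: (N, 2) arrays of corresponding points in *normalised*
    camera coordinates (pixels premultiplied by inv(K)), N >= 8.
    """
    u1, v1 = x1[:, 0], x1[:, 1]
    u2, v2 = x2[:, 0], x2[:, 1]
    # Each correspondence contributes one row of A, so that A @ E.ravel() = 0.
    A = np.column_stack([u2 * u1, u2 * v1, u2,
                         v2 * u1, v2 * v1, v2,
                         u1, v1, np.ones_like(u1)])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)          # null vector of A, reshaped
    # Project onto the essential-matrix manifold: singular values (1, 1, 0).
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

def decompose(E):
    """Return the four candidate (R, t) pairs hidden in an essential matrix."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:          # force proper rotations (det = +1)
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    t = U[:, 2]                       # translation, up to scale and sign
    return [(U @ W @ Vt, t), (U @ W @ Vt, -t),
            (U @ W.T @ Vt, t), (U @ W.T @ Vt, -t)]
```

`decompose` returns four (R, t) candidates; the physically correct one is the pair that places triangulated points in front of both cameras (the cheirality check that `cv2.recoverPose` performs for you), and t is only ever recovered up to an unknown scale.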

From your statement I understand that you already have a set of 2D correspondences, which you can feed into cvFindFundamentalMat. This finds the fundamental matrix relating the two perspectives: for each point p in camera 1 and corresponding point p' in camera 2, p'ᵀFp = 0. The calculated fundamental matrix may then be passed to OpenCV's ComputeCorrespondEpilines function, which finds the epipolar lines corresponding to the specified points. It can also be passed to the StereoRectifyUncalibrated function to compute the rectification transformation; there you get the rotation and translation (up to a scale factor) between the two cameras' coordinate systems.
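The constraint p'ᵀFp = 0 is easy to verify numerically once F has been estimated. The small numpy helper below (the function name is just illustrative) computes the algebraic residual for each correspondence, which is a quick way to sanity-check the matrix returned by cvFindFundamentalMat:

```python
import numpy as np

def epipolar_residuals(F, pts1, pts2):
    """Algebraic residual |p2^T F p1| for each correspondence.

    F: (3, 3) fundamental matrix; pts1, pts2: (N, 2) pixel coordinates.
    A residual near zero means the pair is consistent with F.
    """
    h1 = np.hstack([pts1, np.ones((len(pts1), 1))])  # homogeneous coordinates
    h2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    return np.abs(np.einsum('ni,ij,nj->n', h2, F, h1))
```

Residuals far from zero flag likely mismatches, so this can double as a crude outlier filter before further processing.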

The essential matrix is a metric object pertaining to calibrated cameras, while the fundamental matrix describes the correspondence in the more general and fundamental terms of projective geometry, so I don't think you need the essential matrix in your case: all the information is contained in the fundamental matrix. Also, the rectification transformation is computed without knowing the intrinsic parameters of the cameras or their relative position in space.

If you are not using a chessboard calibration pattern but instead generic objects or images, you are correct: you need to find the correspondences either manually or with a robust feature extractor and matcher such as ORB, MSER, SURF, SIFT, or FAST.

Furthermore, I suggest referring to the OpenCV documentation here. I hope this helps.
