Why epipolar geometry/SfM does not give proper values in Python OpenCV
I tried to find the rotation/translation between two images.
For the simplest case, I used two identical images and checked whether the result is a zero translation and an identity rotation matrix.
However, it does not give the results that I expected. Why?
ORB features are used, and ten matched features are used to find the essential matrix and R/t.
The result (for two identical images) is:
t = [[ 0.57735027] [-0.57735027] [ 0.57735027]]
r = [[-0.33333333 -0.66666667 0.66666667]
[-0.66666667 -0.33333333 -0.66666667]
[ 0.66666667 -0.66666667 -0.33333333]]
What I expected is:
t = [[0, 0, 0]]
r = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
Why does it give these strange results?

import cv2
import numpy as np

orb = cv2.ORB_create()
img1 = self.img1
img2 = self.img2
gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
kpts1, descs1 = orb.detectAndCompute(gray1, None)
kpts2, descs2 = orb.detectAndCompute(gray2, None)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(descs1, descs2)
dmatches = sorted(matches, key=lambda x: x.distance)
src_pts = np.float32([kpts1[m.queryIdx].pt for m in dmatches]).reshape(-1, 1, 2)
src_pts = src_pts[0:10]
dst_pts = np.float32([kpts2[m.trainIdx].pt for m in dmatches]).reshape(-1, 1, 2)
dst_pts = dst_pts[0:10]
K = np.array([[842.102288, 0., 263.697271],
[0., 833.300569, 536.024168],
[0., 0., 1.]])
E, mask2 = cv2.findEssentialMat(src_pts, dst_pts, K, cv2.RANSAC, 0.999, 1.0)
points, R, t, mask = cv2.recoverPose(E, src_pts, dst_pts)
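(Side note on the observed output: `cv2.recoverPose` returns the translation normalized to unit length, which is why every component is ±0.57735027 = 1/√3. More fundamentally, for two identical views the true essential matrix E = [t]ₓR is the zero matrix, so there is no epipolar constraint left to decompose and any returned R/t is arbitrary. A minimal NumPy sketch of this degeneracy, not part of the original post:)

```python
import numpy as np

def skew(v):
    """Skew-symmetric cross-product matrix [v]_x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# The essential matrix is E = [t]_x R. For two identical images the
# relative pose is R = I, t = 0.
R = np.eye(3)
t = np.zeros(3)
E = skew(t) @ R

print(E)
# E is the zero matrix: x2^T E x1 = 0 holds for EVERY point pair and
# EVERY candidate (R, t), so the pose recovered from such an E is
# meaningless. On top of that, recoverPose always rescales t to unit
# norm, producing values like the 1/sqrt(3) components in the question.
```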
Sometimes we need to test things to find the problem, so it would be very useful if you posted a link to your image.
However, there are a few things you should know: in your call to cv2.recoverPose you never pass the camera matrix, and you discard the inlier mask returned by cv2.findEssentialMat.
So as a first step, while waiting for you to post your image, please test this code:

import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures = 2000)
img1 = self.img1
img2 = self.img2
gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
kpts1, descs1 = orb.detectAndCompute(gray1, None)
kpts2, descs2 = orb.detectAndCompute(gray2, None)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(descs1, descs2)
dmatches = sorted(matches, key=lambda x: x.distance)
src_pts = np.float32([kpts1[m.queryIdx].pt for m in dmatches]).reshape(-1, 1, 2)
dst_pts = np.float32([kpts2[m.trainIdx].pt for m in dmatches]).reshape(-1, 1, 2)
K = np.array([[842.102288, 0., 263.697271],
[0., 833.300569, 536.024168],
[0., 0., 1.]])
E, inliers_mask_E = cv2.findEssentialMat(src_pts, dst_pts, K, method = cv2.RANSAC, prob = 0.999, threshold = 1.0)
points, R, t, inliers_mask_RP = cv2.recoverPose(E, src_pts, dst_pts, cameraMatrix = K, mask = inliers_mask_E)
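(Why passing `cameraMatrix=K` matters: `recoverPose` needs the intrinsics to convert pixel coordinates into normalized camera coordinates; without K it treats raw pixel values as if they were already normalized, so R and t are computed for the wrong geometry. A small sketch of that normalization, using K from the question and a hypothetical pixel coordinate:)

```python
import numpy as np

# Intrinsic matrix from the question.
K = np.array([[842.102288, 0.0, 263.697271],
              [0.0, 833.300569, 536.024168],
              [0.0, 0.0, 1.0]])

# A hypothetical pixel coordinate in homogeneous form.
p_pix = np.array([300.0, 500.0, 1.0])

# With cameraMatrix=K, recoverPose effectively works with K^-1 * p:
# small, dimensionless normalized coordinates.
p_norm = np.linalg.inv(K) @ p_pix
print(p_norm)

# Without K, the raw pixel values (hundreds of units) are fed straight
# into the epipolar equations, off by a factor of roughly the focal
# length, which is one reason the recovered pose comes out wrong.
```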