
Homography and image scaling in OpenCV

I am calculating a homography between two images img1 and img2 (the images contain mostly one planar object, so the homography works well between them) using standard methods in OpenCV in Python. Namely, I compute point matches between the images using SIFT and then call cv2.findHomography.

To make the computation faster I scale down the two images into small1 and small2 and perform the calculations on these smaller copies, so I obtain the homography matrix H, which maps small1 onto small2. However, at the end, I would like to use the homography to project the full-size image img1 onto the other full-size image img2.
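For reference, a minimal sketch of the setup described above (file names, the shrink factor and the matcher parameters are placeholders of mine, not taken from the question):

import cv2
import numpy as np

img1 = cv2.imread("img1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("img2.png", cv2.IMREAD_GRAYSCALE)

scale = 0.25  # shrink factor used to speed up matching (illustrative value)
small1 = cv2.resize(img1, None, fx=scale, fy=scale)
small2 = cv2.resize(img2, None, fx=scale, fy=scale)

# SIFT matches on the small copies, ratio test, then a RANSAC homography fit.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(small1, None)
kp2, des2 = sift.detectAndCompute(small2, None)
good = [m for m, n in cv2.BFMatcher().knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
pts2 = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 5.0)  # H maps small1 -> small2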

I thought I could simply transform the homography matrix H as H_full_size = A * H * A_inverse, where A is the matrix representing the scaling from img1 to small1 and A_inverse is its inverse. However, that does not work. If I apply cv2.warpPerspective to the scaled-down image small1 with H, everything goes as expected and the result (largely) overlaps with small2. If I apply cv2.warpPerspective to the full-size image img1 with H_full_size, the result does not map onto img2.

However, if I project the point matches (detected on the scaled-down images) using A (something like projected_pts = cv2.perspectiveTransform(pts, A)) and then calculate H_full_size from these projected points, everything works fine.
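A sketch of that working route, continuing from the snippet above and under my reading that the matches are mapped from small-image coordinates back to full-size coordinates before refitting (the up-scaling matrix and its name are mine, not from the question):

# Inverse of the shrink: maps small-image coordinates back to full-size coordinates.
up = 1.0 / scale
A_up = np.array([[up, 0, 0],
                 [0, up, 0],
                 [0,  0, 1]], dtype=np.float64)

pts1_full = cv2.perspectiveTransform(pts1, A_up)  # small1 coords -> img1 coords
pts2_full = cv2.perspectiveTransform(pts2, A_up)  # small2 coords -> img2 coords
H_full_size, _ = cv2.findHomography(pts1_full, pts2_full, cv2.RANSAC, 5.0)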

Any idea what I could be doing wrong here?

The way I see it, the problem is that a homography applies a perspective projection, which is a non-linear transformation (it is linear only while homogeneous coordinates are being used) and cannot be represented as an ordinary transformation matrix. Multiplying such a perspective projection matrix with other transformation matrices can therefore produce undesirable results.

You can try multiplying your original matrix H element-wise with:

S = [1,1,scale ; 1,1,scale ; 1/scale, 1/scale, 1]

H_full_size = S * H

where scale is, for example, 2 if you decreased the size of the original images by a factor of 2.
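The same suggestion as a sketch in code. Here H is the homography estimated between small1 and small2 (as in the first snippet above) and scale is the ratio full size / small size, which I assume is the same for both images:

import numpy as np

scale = 2.0  # e.g. the small images are half the size of the originals
S = np.array([[1,         1,         scale],
              [1,         1,         scale],
              [1 / scale, 1 / scale, 1]])
H_full_size = S * H  # element-wise (Hadamard) product, NOT a matrix product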

I think your wrong assumption in this passage

H_full_size = A * H * A_inverse where A is the matrix representing the scaling from img1 to small1

derives from our human "love" of symmetry. Joking aside, your formula is correct once you introduce a hypothesis that I am going to spell out below. If I start from this consideration (which is roughly what the cv2 function cv2.warpPerspective does; the formula holds up to a scale factor)

img2 = H_fullsize*img1

you can derive your own formula.

small2 = B*img2
small1 = A*img1
small2 = H*small1
B*img2 = H*A*img1

which (if B is invertible) is equivalent to

img2 = B_inverse*H*A*img1

and therefore

H_fullsize = B_inverse*H*A

So the question becomes: are you sure that the scale matrix from img1 to small1 is equal to the scale matrix from img2 to small2 (or that they at least differ only by a constant scale factor)?
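A sketch of this derivation in code, following the answer's notation: A maps img1 to small1 and B maps img2 to small2 (the shrink factors s1 and s2 are illustrative, and H is the small-image homography, e.g. from the first snippet):

import numpy as np

s1, s2 = 0.25, 0.25   # the two (possibly different) shrink factors
A = np.diag([s1, s1, 1.0])   # img1 -> small1
B = np.diag([s2, s2, 1.0])   # img2 -> small2

# H_fullsize = B^-1 * H * A, as ordinary matrix products
H_fullsize = np.linalg.inv(B) @ H @ A

# Warp the full-size image with the rescaled homography.
h2, w2 = img2.shape[:2]
warped = cv2.warpPerspective(img1, H_fullsize, (w2, h2))

When s1 == s2 this reduces to the element-wise rescaling shown in the previous answer; when the two images were shrunk by different factors, the general B_inverse * H * A form is the one to use.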

If that is your case, remember that, as you write, a homography works only between images of a planar scene (or in the case of pure rotation). If, say, 80% of your SIFT points lie on a plane and 20% lie off it, the homography treats all of these points as if they were on a plane and finds the transformation H that minimizes the overall error (and not the one that is perfect only for the 80% of points on the plane). Also, errors that are evident in a 1080p image may not be so evident in a 320p image (you do not specify by how much you reduce the images!).

