
How to undo a perspective transform for a single point in OpenCV

I am trying to do some image analysis using an inverse perspective map. I used the OpenCV functions getPerspectiveTransform and findHomography to generate a transformation matrix and applied it to the source image. This works well, and I am able to get the points I want from the warped image. The problem is, I don't know how to take individual point values and undo the transform to draw them back on the original picture. I want to undo the transform only for this set of points, in order to find their original locations. How does one do this? The points are in the form Point(x, y) from the OpenCV library.

To invert a homography (e.g. a perspective transformation) you typically just invert the transformation matrix.

So to transform points back from your destination image to your source image, you invert the transformation matrix and transform the points with the result. To transform a point with a transformation matrix, you write it as a homogeneous column vector, multiply the matrix by it from the right, and then de-homogenize the result (divide by its last component).
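As a minimal NumPy sketch of that math (the matrix values below are made up for illustration; in practice H would come from findHomography or getPerspectiveTransform):

import numpy as np

# Made-up 3x3 homography; in practice this comes from findHomography / getPerspectiveTransform
H = np.array([[1.2, 0.1, 30.0],
              [0.0, 1.1, 10.0],
              [0.0, 0.0, 1.0]])

# A point in the destination image, as a homogeneous vector (x, y, 1)
p_dst = np.array([150.0, 80.0, 1.0])

# Invert the homography and multiply the point from the right
H_inv = np.linalg.inv(H)
p_src_h = H_inv @ p_dst

# De-homogenize: divide by the last (w) component to get back to (x, y)
p_src = p_src_h[:2] / p_src_h[2]
print(p_src)  # the point's location in the source image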

Luckily, OpenCV provides not only the warpAffine/warpPerspective methods, which transform each pixel of one image to the other image, but also a method to transform individual points.

Use the cv::perspectiveTransform(inputVector, emptyOutputVector, yourTransformation) method to transform a set of points, where

inputVector is a std::vector<cv::Point2f> (you can use an nx2 or 2xn matrix, too, but that is sometimes error-prone). Alternatively you can use the cv::Point3f type, but I'm not sure whether those are treated as homogeneous coordinates or as 3D points for a 3D transformation (or maybe both?).

outputVector is an empty std::vector<cv::Point2f> where the result will be stored

yourTransformation is a double-precision 3x3 cv::Mat transformation matrix (such as the one returned by findHomography), or 4x4 for 3D points.

Here's a Python example:

import cv2
import numpy as np

# Example homography: maps a 100x100 square onto a made-up quadrilateral
src_quad = np.float32([[0, 0], [100, 0], [100, 100], [0, 100]])
dst_quad = np.float32([[10, 5], [120, 0], [110, 110], [0, 100]])
trans = cv2.getPerspectiveTransform(src_quad, dst_quad)

# Points must be a floating-point array of shape (1, N, 2)
point_original = np.array([[[50, 50]]], dtype=np.float32)

# Forward transform
point_transformed = cv2.perspectiveTransform(point_original, trans)

# Reverse transform
inv_trans = np.linalg.pinv(trans)
round_tripped = cv2.perspectiveTransform(point_transformed, inv_trans)

# Now, round_tripped should be approximately equal to point_original

You can use cv::perspectiveTransform(inputVector, emptyOutputVector, yourTransformation) to apply a perspective transform to points.

Python: cv2.perspectiveTransform(src, m) → dst

src – input two-channel or three-channel floating-point array; each element is a 2D/3D vector to be transformed.
m – 3x3 or 4x4 floating-point transformation matrix, such as the one calculated earlier by cv2.getPerspectiveTransform(_src, _dst).

In Python, you have to pass the points as a NumPy array, as shown below:

# m is the 3x3 matrix from cv2.getPerspectiveTransform or cv2.findHomography
points_to_be_transformed = np.array([[[0, 0]]], dtype=np.float32)
transformed_points = cv2.perspectiveTransform(points_to_be_transformed, m)

transformed_points will have the same shape as the input array points_to_be_transformed.
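To map points from the warped image back onto the original image (the "undo" step the question asks about), you can invert m and pass the inverse to the same function. A short sketch continuing from the snippet above, assuming m is a non-singular 3x3 homography:

import numpy as np  # np.linalg.inv for the matrix inverse

inv_m = np.linalg.inv(m)  # m is the same 3x3 homography used above
points_mapped_back = cv2.perspectiveTransform(transformed_points, inv_m)

# points_mapped_back should be approximately equal to points_to_be_transformed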
