
How to get new coordinates after image rotation?

I'm trying to get the new coordinates of four points after an image rotation. The coordinates are relative: all values lie between 0 and 1. For example, (x1, y1) = [0.15, 0.15], (x2, y2) = [0.8, 0.15], (x3, y3) = [0.8, 0.8], (x4, y4) = [0.15, 0.8]. I want to get the new x, y coordinates when I rotate the image by n degrees.

image = Image.open(os.path.join('./AlignImages', image_name))

labels = np.array(list(map(float, a.split(" ")[1:]))).astype('float32')
#if labels == [0.1 0.1 0.5 0.1 0.5 0.5 0.1 0.5] [x1 y1 x2 y2 x3 y3 x4 y4]  
labels = np.vstack((labels[0::2], labels[1::2]))
# [0.1 0.5 0.5 0.1]    [x1 x2 x3 x4]
# [0.1 0.1 0.5 0.5]    [y1 y2 y3 y4]
print(labels)

labels = np.array([[labels[0][0]-0.5, labels[0][1]-0.5, labels[0][2]-0.5, labels[0][3]-0.5],[0.5-labels[1][0], 0.5-labels[1][1], 0.5-labels[1][2], 0.5-labels[1][3]]])
#Shift the origin to the center of the image and flip y,
#because the image origin is the upper-left corner (0, 0) with y growing downward.
image = image.rotate(rotation_scale, expand=True)
#I gave the option to expand the image so that the rotated image was not cropped.

image.show()
rotation_ = np.array([[np.cos(rotation_scale), (np.sin(rotation_scale))],[-1*np.sin(rotation_scale), np.cos(rotation_scale)]])
#I have defined a transformation matrix.

src = np.matmul(rotation_, labels)
#Multiply the transformation matrix by the coordinates to obtain the new coordinates.


src = np.array([[src[0][0]+0.5, src[0][1]+0.5, src[0][2]+0.5, src[0][3]+0.5],[0.5+src[1][0], 0.5+src[1][1], 0.5+src[1][2], 0.5+src[1][3]]])
#Let the top left corner be 0, 0 again.

print(src)

[[ 0.24779222  1.00296445  0.7265248  -0.05902794]
 [ 0.8065444   0.41615766  0.2350563   0.60667523]]

However, this code does not seem to work. I thought it would give me the four relative coordinates in the rotated image, but it does not at all. I want the relative coordinates of the four vertices in the expanded (rotated) image, with all values between 0 and 1. How do I get the four coordinates I want?

The problem may come from the center point that you rotate the points around. Based on my experience from my last project, you need to know both the center point and the angle.

For example: your image was rotated 90 degrees to the right around the center point (the middle of the image); now you need to rotate the points back by -90 degrees around the same center. The code is in C++ (sorry, I am only familiar with C++, but I think you can easily port it to Python):

// the center point
cv::Point2f center(width / 2.0f, height / 2.0f);

// the angle to rotate, in degrees
// (getRotationMatrix2D expects degrees, not radians)
// in your case it is -90 degrees
double theta_deg = angleInDegree;

// get the rotation matrix
cv::Mat rotateMatrix = cv::getRotationMatrix2D(center, theta_deg, 1.0);

// the vectors holding the landmark points
std::vector<cv::Point2f> inputLandmark;
std::vector<cv::Point2f> outputLandmark;

// apply the same rotation matrix to the points
cv::transform(inputLandmark, outputLandmark, rotateMatrix);
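Since the question is in Python, here is a rough NumPy-only port of the same idea (no OpenCV needed). The helper name `rotate_points`, its point-list signature, and the expanded-canvas formula are my own additions, not from the original code; it is a sketch of what `Image.rotate(angle, expand=True)` does to point coordinates, including converting degrees to radians and flipping the y axis:

```python
import numpy as np

def rotate_points(points, angle_deg, width, height):
    """Rotate relative (0..1) points by angle_deg counter-clockwise about
    the image center, mirroring PIL's Image.rotate(angle, expand=True),
    and return relative coordinates in the expanded image."""
    theta = np.radians(angle_deg)  # trig functions need radians, not degrees
    cos_t, sin_t = np.cos(theta), np.sin(theta)

    # absolute pixel coordinates, origin moved to the image center
    pts = np.asarray(points, dtype=float) * [width, height]
    centered = pts - [width / 2, height / 2]
    centered[:, 1] *= -1  # flip y (image y grows downward) so CCW matches PIL

    rot = np.array([[cos_t, -sin_t],
                    [sin_t,  cos_t]])
    rotated = centered @ rot.T

    # size of the expanded canvas (the bounding box of the rotated image)
    new_w = abs(width * cos_t) + abs(height * sin_t)
    new_h = abs(width * sin_t) + abs(height * cos_t)

    rotated[:, 1] *= -1  # flip y back to image coordinates
    shifted = rotated + [new_w / 2, new_h / 2]
    return shifted / [new_w, new_h]  # back to relative (0..1) coordinates
```

For a 90-degree rotation of a square image, for example, the top-left point (0.15, 0.15) should end up at the bottom-left, (0.15, 0.85), which this sketch reproduces.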
