OpenCV: Understanding warpPerspective / perspective transform

I made a small example for myself to play around with OpenCV's warpPerspective, but the output is not entirely what I expected.

My input is a bar at a 45° angle. I want to transform it so that it's vertically aligned, at a 90° angle. No problem with that. However, what I don't understand is that everything around the actual destination points is black. The reason I don't understand this is that only the transformation matrix gets passed to the warpPerspective function, not the destination points themselves. So my expected output would be a bar at a 90° angle with most of the area around it yellow instead of black. Where's my error in reasoning?

# imports needed to run this snippet
import cv2
import numpy as np
import matplotlib.image as mpimg
import matplotlib.pyplot as plt


# helper function to show an image with a title
def showImage(img, title):
    fig = plt.figure()
    plt.suptitle(title)
    plt.imshow(img)


# read and show test image
img = mpimg.imread('test_transform.jpg')
showImage(img, "input image")


# source points
top_left = [194,430]
top_right = [521,103]
bottom_right = [549,131]
bottom_left = [222,458]
pts = np.array([bottom_left,bottom_right,top_right,top_left])


# target points
y_off = 400  # y offset
top_left_dst = [top_left[0], top_left[1] - y_off]
top_right_dst = [top_left_dst[0] + 39.6, top_left_dst[1]]
bottom_right_dst = [top_right_dst[0], top_right_dst[1] + 462.4]
bottom_left_dst = [top_left_dst[0], bottom_right_dst[1]]
dst_pts = np.array([bottom_left_dst, bottom_right_dst, top_right_dst, top_left_dst])

# generate a preview to show where the warped bar would end up
preview=np.copy(img)
cv2.polylines(preview,np.int32([dst_pts]),True,(0,0,255), 5)
cv2.polylines(preview,np.int32([pts]),True,(255,0,255), 1)
showImage(preview, "preview")


# calculate transformation matrix
pts = np.float32(pts.tolist())
dst_pts = np.float32(dst_pts.tolist())
M = cv2.getPerspectiveTransform(pts, dst_pts)

# warp image and show the resulting image
image_size = (img.shape[1], img.shape[0])
warped = cv2.warpPerspective(img, M, dsize = image_size, flags = cv2.INTER_LINEAR)
showImage(warped, "warped")
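
For reference, cv2.warpPerspective also takes borderMode / borderValue parameters: output pixels whose inverse mapping falls outside the source image are filled with borderValue, which defaults to black. A small variation of the call above (a sketch, reusing img, M and image_size from the snippet) that fills those pixels with yellow instead, to make them visible:

# variation: color the "no source pixel" area yellow instead of the default black
warped_yellow = cv2.warpPerspective(img, M, dsize = image_size, flags = cv2.INTER_LINEAR,
                                    borderMode = cv2.BORDER_CONSTANT, borderValue = (255, 255, 0))
showImage(warped_yellow, "warped, fill shown in yellow")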

The result using this code is:

[image: warped result]

Here's my input image test_transform.jpg:

[image: input image]

And here is the same image with coordinates added:

[image: input image with coordinates added]

By request, here is the transformation matrix:

[[  6.05504680e-02  -6.05504680e-02   2.08289910e+02]
 [  8.25714275e+00   8.25714275e+00  -5.12245707e+03]
 [  2.16840434e-18   3.03576608e-18   1.00000000e+00]]
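
For reference, a quick way to check what this matrix does to individual points (a sketch, reusing img, pts and M from the snippet above) is cv2.perspectiveTransform: the source quad should land exactly on dst_pts, and mapping the image corners shows where the rest of the picture ends up:

h, w = img.shape[:2]
corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
print(cv2.perspectiveTransform(corners, M))                # image corners after M
print(cv2.perspectiveTransform(pts.reshape(-1, 1, 2), M))  # should match dst_pts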

The ordering of the points in your arrays, or their positions, might be the fault. Check this transformed image: the dst_pts array is np.array([[196,492],[233,494],[234,32],[196,34]]), which is more or less the blue rectangle in your preview image. (I made up the coordinates myself to make sure they are right.) NOTE: your source and destination points should be in the right order.
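
A sketch of how that suggestion would look in the question's code (the coordinates are the ones from this answer, in the same point order as the question's pts array: bottom_left, bottom_right, top_right, top_left; everything else is reused from the question's snippet):

dst_pts = np.float32([[196, 492], [233, 494], [234, 32], [196, 34]])
M = cv2.getPerspectiveTransform(np.float32(pts), dst_pts)
warped = cv2.warpPerspective(img, M, dsize = image_size, flags = cv2.INTER_LINEAR)
showImage(warped, "warped with consistent destination points")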

[image: result with the answerer's dst_pts]
