
What is wrong with my OpenCV remap()ing?

I took the code from the answer https://stackoverflow.com/a/10374811/4828720 to "Image transformation in OpenCV" and tried to adapt it to an image of mine.

My source image:

In it, I identified the pixel coordinates of the centers of the checkered bricks, illustrated here:

[source points]

My target resolution is 784×784. I calculated the destination coordinates of the pixels. My resulting code is this:

import cv2
from scipy.interpolate import griddata
import numpy as np

source = np.array([
    [315, 15],
    [962, 18],
    [526, 213],
    [754, 215],
    [516, 434],
    [761, 433],
    [225, 701],
    [1036, 694],
], dtype=int)

destination = np.array([
     [14, 14],
     [770, 14],
     [238, 238],
     [546, 238],
     [238, 546],
     [546, 546],
     [14, 770],
     [770, 770]
], dtype=int)

source_image = cv2.imread('frames.png')

grid_x, grid_y = np.mgrid[0:783:784j, 0:783:784j]
grid_z = griddata(destination, source, (grid_x, grid_y), method='cubic')
map_x = np.append([], [ar[:,1] for ar in grid_z]).reshape(784,784)
map_y = np.append([], [ar[:,0] for ar in grid_z]).reshape(784,784)
map_x_32 = map_x.astype('float32')
map_y_32 = map_y.astype('float32')
warped_image = cv2.remap(source_image, map_x_32, map_y_32, cv2.INTER_CUBIC)
cv2.imwrite("/tmp/warped2.png", warped_image)

If I run this, none of the source points end up at their intended destination; instead I get a warped mess. I added the destination points on top here:

[my result]

Where am I going wrong? I noticed that my grid and map arrays are not as nicely distributed as the ones in the example. Do I have too few points? Do I need them in a regular grid? I tried using only the four points in the outer corners, with no luck either.
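For context, the `griddata` call is meant to interpolate the sparse point correspondences into a full per-pixel lookup map over the destination grid. A toy sketch of that idea on a small grid (the coordinates here are hypothetical, not the image's real ones):

```python
import numpy as np
from scipy.interpolate import griddata

# Known destination -> source correspondences at the four corners of a
# 10x10 destination grid (hypothetical toy values).
destination = np.array([[0, 0], [0, 9], [9, 0], [9, 9]], dtype=float)
source = np.array([[0, 0], [0, 90], [90, 0], [90, 90]], dtype=float)

# One coordinate pair per destination pixel.
grid_y, grid_x = np.mgrid[0:10, 0:10]

# Interpolate the source coordinates over every destination pixel;
# the result has shape (10, 10, 2): a source lookup per pixel.
maps = griddata(destination, source, (grid_y, grid_x), method='linear')

print(maps[0, 0])  # the corner maps exactly to its known source point (0, 0)
print(maps[9, 9])  # likewise (90, 90)
```

At the given correspondences the interpolated map reproduces the source points exactly; everything in between is filled in by the interpolant.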

If you only have 8 points for warping and no real distortion in your image, I'd suggest using a perspective transformation as described here.

The link you are quoting tries to eliminate additional distortions which lead to non-straight lines, but all lines in your image are straight.

The code would look like this:

import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread('image.png')
rows,cols,ch = img.shape

pts1 = np.float32([
    [315, 15],
    [962, 18],
    [225, 701],
    [1036, 694],
])

pts2 = np.float32([
    [14, 14],
    [770, 14],
    [14, 770],
    [770, 770]
])

M = cv2.getPerspectiveTransform(pts1,pts2)

dst = cv2.warpPerspective(img,M,(784,784))

plt.subplot(121),plt.imshow(img),plt.title('Input')
plt.subplot(122),plt.imshow(dst),plt.title('Output')
plt.show()
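Under the hood, `cv2.getPerspectiveTransform` solves an 8-unknown linear system for the 3×3 homography that carries the four source corners to the four destination corners. A rough numpy-only sketch of that system (the helper name is made up, and this is an illustration, not OpenCV's actual implementation):

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for H such that H @ [x, y, 1]^T ~ [u, v, 1]^T (up to scale),
    given four (x, y) -> (u, v) correspondences. H[2, 2] is fixed to 1."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h0*x + h1*y + h2) / (h6*x + h7*y + 1), rearranged to be linear in h
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

src = [(315, 15), (962, 18), (225, 701), (1036, 694)]
dst = [(14, 14), (770, 14), (14, 770), (770, 770)]
H = homography_from_points(src, dst)

# Applying H to a source corner (in homogeneous coordinates) and dividing
# by the last component should land on its destination corner.
p = H @ np.array([315, 15, 1.0])
print(p[:2] / p[2])  # approximately (14, 14)
```

This is the same mapping `warpPerspective` then applies to every pixel.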


The whole problem was that I, again, got confused by numpy's row/column indexing instead of x/y. Someone in the #opencv IRC channel pointed it out. My source and destination arrays had to have their columns switched:

source = np.array([
    [15, 315],
    [18, 962],
    [213, 526],
    [215, 754],
    [434, 516],
    [433, 761],
    [701, 225],
    [694, 1036],
], dtype=int)

destination = np.array([
     [14, 14],
     [14, 770],
     [238, 238],
     [238, 546],
     [546, 238],
     [546, 546],
     [770, 14],
     [770, 770]
], dtype=int)
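Rather than rewriting the arrays by hand, the same (x, y) → (row, col) swap can be done by reversing the last axis. A small numpy sketch, using the original question's source coordinates:

```python
import numpy as np

# Points as (x, y), the way they were picked from the image.
source_xy = np.array([
    [315, 15],
    [962, 18],
    [225, 701],
    [1036, 694],
])

# Reverse the columns so each point becomes (y, x), i.e. (row, col),
# which is the order numpy-based indexing expects.
source_rc = source_xy[:, ::-1]

print(source_rc[0])  # the first point (315, 15) becomes (15, 315)
```

The same one-liner applies to the destination array.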

Then it worked as intended (ignore the ugly warping; this was a simplified list of coordinates used to find the bug):

