
How do I fit a grid of points on a random point cloud?

I have a binary image with dots, which I obtained using OpenCV's goodFeaturesToTrack, as shown on Image1.

Image1: Cloud of points

I would like to fit a grid of 4*25 dots on it, such as the one shown on Image2 (not all points are visible on the image, but it is a regular 4*25-point rectangle).

Image2: Model grid of points

My model grid of 4*25 dots is parametrized by:

1. The position of the top-left corner
2. The inclination of the rectangle with the horizon

The code below shows a function that builds such a model.

This problem seems to be close to a chessboard corner detection problem.

I would like to know how to fit my model grid of points to the input image and get the position and angle of the grid. I can easily measure a distance between the two images (the input one and the one with the model grid), but I would like to avoid having to check every pixel and angle of the image to find the minimum of this distance.

import numpy as np

def ModelGrid(pos, angle, shape):

    # Initialization of the output image of size `shape`
    table = np.zeros(shape)

    # Parameters
    size_pan = [32, 20]   # Spacing between dots, in pixels
    nb_corners = [4, 25]  # 4*25 grid of dots
    index = np.ndarray([nb_corners[0], nb_corners[1], 2], dtype=np.dtype('int16'))
    angle = angle * np.pi / 180  # Degrees to radians

    # Creation of the table: place each dot of the rotated grid
    for i in range(nb_corners[0]):
        for j in range(nb_corners[1]):
            index[i, j, 0] = pos[0] + j * int(size_pan[1] * np.sin(angle)) + i * int(size_pan[0] * np.cos(angle))
            index[i, j, 1] = pos[1] + j * int(size_pan[1] * np.cos(angle)) - i * int(size_pan[0] * np.sin(angle))

            # Only draw the dots that fall inside the image bounds
            if 0 <= index[i, j, 0] < table.shape[0]:
                if 0 <= index[i, j, 1] < table.shape[1]:
                    table[index[i, j, 0], index[i, j, 1]] = 1

    return table

A solution I found, which works relatively well, is the following:

First, I create an index of the positions of all positive pixels, simply by going through the image. I will call these pixels corners.
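As a minimal sketch, this first pass can be done in one call with NumPy's argwhere (the image array and its contents here are hypothetical stand-ins for the goodFeaturesToTrack output):

    import numpy as np

    # Hypothetical binary image with three positive pixels
    img = np.zeros((10, 10), dtype=np.uint8)
    img[2, 3] = 1
    img[5, 7] = 1
    img[8, 1] = 1

    # One (row, col) pair per positive pixel: the index of corner positions
    corners = np.argwhere(img > 0)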

I then use this index to compute an average angle of inclination: for each of the corners, I look for others that are close enough, in certain areas, to define a cross. For each pixel I manage to find the ones that are directly on its left, right, top, and bottom. I use this cross to calculate an inclination angle, and then use the median of all obtained inclination angles as the angle for my model grid of points.
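A simplified sketch of the angle estimate, using only the "right" arm of the cross (the search tolerances max_dist and band are assumptions, not values from the post):

    import numpy as np

    def estimate_angle(corners, max_dist=40, band=5):
        """Median inclination, in degrees, of the corner cloud with the horizon.

        For every corner, look for another corner roughly on its right
        (close in row, ahead in column) and measure the angle of the
        segment joining them; the median of all such angles is robust
        to outliers and missing corners.
        """
        angles = []
        for r, c in corners:
            for r2, c2 in corners:
                dr, dc = r2 - r, c2 - c
                if 0 < dc <= max_dist and abs(dr) <= band:
                    angles.append(np.degrees(np.arctan2(dr, dc)))
        return float(np.median(angles)) if angles else 0.0

    # Corners lying on a horizontal line should give an angle of 0 degrees
    flat = [(10, c) for c in range(0, 100, 20)]
    angle = estimate_angle(flat)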

Once I have this angle, I simply build a table using this angle and the position of each corner. The optimization function measures the number of coincident pixels in both images and returns the best position.
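This search can be sketched as follows: each detected corner is tried as the grid's top-left point, and the candidate whose model overlaps the most positive pixels wins. The toy_model below is a hypothetical 2-dot stand-in so the sketch is self-contained; in practice model_fn would be the ModelGrid function above.

    import numpy as np

    def best_position(image, corners, angle, model_fn):
        """Return the corner that maximizes pixel overlap between the
        model grid built at that position and the input image."""
        best_pos, best_score = None, -1
        for pos in corners:
            model = model_fn(pos, angle, image.shape)
            score = int(np.sum((model > 0) & (image > 0)))
            if score > best_score:
                best_pos, best_score = pos, score
        return best_pos, best_score

    # Tiny demo: a 2-dot "grid" whose true top-left corner is (2, 2)
    def toy_model(pos, angle, shape):
        m = np.zeros(shape)
        for dr, dc in [(0, 0), (0, 3)]:
            r, c = pos[0] + dr, pos[1] + dc
            if 0 <= r < shape[0] and 0 <= c < shape[1]:
                m[r, c] = 1
        return m

    img = np.zeros((8, 8))
    img[2, 2] = img[2, 5] = 1
    pos, score = best_position(img, [(1, 1), (2, 2)], 0, toy_model)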

This approach works fine for most examples, but the returned "best position" has to be one of the corners, which does not imply that it corresponds to the true best position, especially if the top-left corner of the grid is missing from the cloud of corners.
