
How to find the distance between two concentric contours, for different angles?

I have an image with two contours, where one contour is always 'inside' the other. I want to find the distance between the two contours at 90 different angles (that is, the distance every 4 degrees). How do I go about doing it?

Here's an example image:

[example image]

Thank you!

In the following code, I have only given you the example for the vertical line; the rest can be obtained by rotating that line (a short sketch of such a rotation follows the listing below). The result looks like this; instead of drawing, you can use the coordinates for the distance calculation.

[result image]

import shapely.geometry as shapgeo
import numpy as np
import cv2


# Read the image as grayscale and binarize it
img = cv2.imread('image.jpg', 0)
ret, img = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

# Find contours and approximate the outer and inner ellipse outlines
# (OpenCV 3.x returns (image, contours, hierarchy); in OpenCV 4.x, findContours
#  returns only (contours, hierarchy), so drop the leading underscore there)
_, contours0, hierarchy = cv2.findContours(img.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
outer_ellipse = [cv2.approxPolyDP(contours0[0], 0.1, True)]
inner_ellipse = [cv2.approxPolyDP(contours0[2], 0.1, True)]

h, w = img.shape[:2]
vis = np.zeros((h, w, 3), np.uint8)
cv2.drawContours( vis, outer_ellipse, -1, (255,0,0), 1)
cv2.drawContours( vis, inner_ellipse, -1, (0,0,255), 1)

##Extract contour of ellipses
cnt_outer = np.vstack(outer_ellipse).squeeze()
cnt_inner = np.vstack(inner_ellipse).squeeze()

#Determine centroid
M = cv2.moments(cnt_inner)
cx = int(M['m10']/M['m00'])
cy = int(M['m01']/M['m00'])
print(cx, cy)

# Draw a full vertical line through the centroid (spanning the image height)
cv2.line(vis, (cx, 0), (cx, h), (150, 0, 0), 1)

# Calculate intersections using Shapely
# http://toblerity.org/shapely/manual.html
# (asLineString exists in Shapely < 2.0; in Shapely 2.x, use shapgeo.LineString(...) directly)
PolygonEllipse_outer = shapgeo.asLineString(cnt_outer)
PolygonEllipse_inner = shapgeo.asLineString(cnt_inner)
PolygonVerticalLine = shapgeo.LineString([(cx, 0), (cx, h)])


# Each intersection is a pair of points (where the vertical line enters and leaves the ellipse)
insecouter = np.array(PolygonEllipse_outer.intersection(PolygonVerticalLine)).astype(int)
insecinner = np.array(PolygonEllipse_inner.intersection(PolygonVerticalLine)).astype(int)
# Draw the two gap segments between the inner and outer intersections
# (all x-coordinates equal cx, since the line is vertical)
cv2.line(vis, (insecouter[0, 0], insecinner[1, 1]), (insecouter[1, 0], insecouter[1, 1]), (0, 255, 0), 2)
cv2.line(vis, (insecouter[0, 0], insecinner[0, 1]), (insecouter[1, 0], insecouter[0, 1]), (0, 255, 0), 2)

cv2.imshow('contours', vis)

0xFF & cv2.waitKey()
cv2.destroyAllWindows()  
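
As a rough sketch of the rotation mentioned above (my addition, not from the original answer; the helper name line_at_angle is hypothetical): instead of a fixed vertical line, build the line's far end point from an angle and intersect it with the same Shapely LineStrings.

def line_at_angle(cx, cy, angle_deg, length):
    # End point of a ray starting at the centroid (cx, cy) in the given direction
    theta = np.deg2rad(angle_deg)
    x2 = cx + length * np.cos(theta)
    y2 = cy + length * np.sin(theta)
    return shapgeo.LineString([(cx, cy), (x2, y2)])

# Example for a 45-degree ray; if the ray crosses each contour exactly once,
# both intersections are single Points and .distance() is their Euclidean distance
ray = line_at_angle(cx, cy, 45, max(h, w))
p_outer = PolygonEllipse_outer.intersection(ray)
p_inner = PolygonEllipse_inner.intersection(ray)
print(p_outer.distance(p_inner))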

Take this image of two sets of two shapes:

[image of two sets of two shapes]

We want to find the distance between the edges of each set of shapes, including where the edges overlap.

  1. First things first, we import the necessary modules:
import cv2
import numpy as np
  2. To do that, we will first need to retrieve every shape in the image as a list of contours. In the above particular example, there are 4 shapes that need to be detected. To retrieve each shape, we will need to use a mask that masks out every color besides the color of the shape of interest:
def get_masked(img, lower, upper):
    img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(img_hsv, np.array(lower), np.array(upper))
    img_mask = cv2.bitwise_and(img, img, mask=mask)
    return img_mask

The lower and upper parameters determine the minimum and maximum HSV values that will not be masked out of the image. Given the right lower and upper parameters, you will be able to extract one image with only the green shapes, and one image with only the blue shapes:

[masked images: green shapes only, blue shapes only]
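
For example (using the HSV ranges that appear in the complete listing further down), the two masked images can be produced like this:

img = cv2.imread("shapes1.png")
img_green = get_masked(img, [10, 0, 0], [70, 255, 255])   # keep green-ish hues only
img_blue = get_masked(img, [70, 0, 0], [179, 255, 255])   # keep blue-ish hues only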

  3. With the masked images, you can then proceed to process them into cleaner contours. Here is the preprocessing function, with values that can be tweaked whenever necessary:
def get_processed(img):
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    img_blur = cv2.GaussianBlur(img_gray, (7, 7), 7)
    img_canny = cv2.Canny(img_blur, 50, 50)
    kernel = np.ones((7, 7))
    img_dilate = cv2.dilate(img_canny, kernel, iterations=2)
    img_erode = cv2.erode(img_dilate, kernel, iterations=2)
    return img_erode

Passing in the masked images will give you:

[processed edge images]

  4. With the images masked and processed, they are ready for OpenCV to detect their contours:
def get_contours(img):
    contours, hierarchy = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    return [cnt for cnt in contours if cv2.contourArea(cnt) > 500]

The list comprehension at the return statement filters out noise by requiring every contour to have an area greater than 500.
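
If 500 turns out to be too strict or too lenient for your image resolution, one quick way to choose the threshold (my addition, not part of the original answer) is to print the contour areas that cv2.findContours actually produces, for example on one of the processed images built further down:

contours, hierarchy = cv2.findContours(img_green_processed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
print(sorted(cv2.contourArea(cnt) for cnt in contours))  # real shapes vs. small noise blobs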

  5. Now, we will define some basic functions that we will use later:
def get_centeroid(cnt):
    length = len(cnt)
    sum_x = np.sum(cnt[..., 0])
    sum_y = np.sum(cnt[..., 1])
    return int(sum_x / length), int(sum_y / length)

def get_pt_at_angle(pts, pt, ang):
    angles = np.rad2deg(np.arctan2(*(pt - pts).T))
    angles = np.where(angles < -90, angles + 450, angles + 90)
    found= np.rint(angles) == ang
    if np.any(found):
        return pts[found][0]

The names of the functions are pretty self-explanatory; the first one returns the center point of a contour, and the second one returns the point in a given array of points, pts, that lies at a given angle, ang, relative to a given point, pt. The np.where call in the get_pt_at_angle function shifts the starting angle, 0, to the positive x axis, as by default it would be at the positive y axis.
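
To see what that transform does concretely, here is a small sanity check (my addition, not part of the original answer). It feeds four probe points around a centre through the same expression and should print 0, 90, 180 and 270 for points that lie to the right of, above, to the left of and below the centre in image coordinates:

import numpy as np

def to_angle(pts, pt):
    # the same expression used inside get_pt_at_angle
    angles = np.rad2deg(np.arctan2(*(pt - pts).T))
    return np.where(angles < -90, angles + 450, angles + 90)

center = np.array([100, 100])
# probe points to the right of, above, to the left of and below the center
# (image coordinates: y grows downwards)
probes = np.array([[150, 100], [100, 50], [50, 100], [100, 150]])
print(to_angle(probes, center))  # [  0.  90. 180. 270.]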

  6. Time to define the function that will return the distances. First, define it so that these five parameters can be passed in:
def get_distances(img, cnt1, cnt2, center, step):

A brief explanation of each parameter:

  • img, the image array
  • cnt1, the first shape
  • cnt2, the second shape
  • center, the origin for the distance calculations
  • step, the number of degrees to jump per value
  7. Define a dictionary to store the distances, with the angles as keys and the distances as values:
    angles = dict()
  8. Loop through each angle at which you want the distance between the edges of the two shapes, and find the point on each of the two contours that lies at the current angle of the iteration, angle, relative to the origin point, center, using the get_pt_at_angle function we defined earlier:
    for angle in range(0, 360, step):
        pt1 = get_pt_at_angle(cnt1, center, angle)
        pt2 = get_pt_at_angle(cnt2, center, angle)
  9. Check whether a point at that specific angle relative to the origin exists in both contours:
        if np.any(pt1) and np.any(pt2):
  10. You can use the np.linalg.norm method to get the distance between the two points. I also made it draw the text and connecting lines for visualization. Don't forget to add the angle and value to the angles dictionary. At the end of the function, return the image with the text and lines drawn on it:
            d = round(np.linalg.norm(pt1 - pt2))
            cv2.putText(img, str(d), tuple(pt1), cv2.FONT_HERSHEY_PLAIN, 0.8, (0, 0, 0))
            cv2.drawContours(img, np.array([[center, pt1]]), -1, (255, 0, 255), 1)
            angles[angle] = d

    return img, angles
  11. Finally, you can use the functions defined above on an image:
img = cv2.imread("shapes1.png")

img_green = get_masked(img, [10, 0, 0], [70, 255, 255])
img_blue = get_masked(img, [70, 0, 0], [179, 255, 255])

img_green_processed = get_processed(img_green)
img_blue_processed = get_processed(img_blue)

img_green_contours = get_contours(img_green_processed)
img_blue_contours = get_contours(img_blue_processed)

Using the image of four shapes, you can tell that img_green_contours and img_blue_contours will each contain two contours. But you might be wondering: how did I choose the minimum and maximum HSV values? Well, I used a trackbar script. You can run the code below, adjusting the HSV values using the trackbars until you find a range where everything in the image is masked out (in black) except for the shape you want to retrieve:

import cv2
import numpy as np

def empty(a):
    pass
    
cv2.namedWindow("TrackBars")
cv2.createTrackbar("Hue Min", "TrackBars", 0, 179, empty)
cv2.createTrackbar("Hue Max", "TrackBars", 179, 179, empty)
cv2.createTrackbar("Sat Min", "TrackBars", 0, 255, empty)
cv2.createTrackbar("Sat Max", "TrackBars", 255, 255, empty)
cv2.createTrackbar("Val Min", "TrackBars", 0, 255, empty)
cv2.createTrackbar("Val Max", "TrackBars", 255, 255, empty)

img = cv2.imread("shapes0.png")

while True:
    h_min = cv2.getTrackbarPos("Hue Min", "TrackBars")
    h_max = cv2.getTrackbarPos("Hue Max", "TrackBars")
    s_min = cv2.getTrackbarPos("Sat Min", "TrackBars")
    s_max = cv2.getTrackbarPos("Sat Max", "TrackBars")
    v_min = cv2.getTrackbarPos("Val Min", "TrackBars")
    v_max = cv2.getTrackbarPos("Val Max", "TrackBars")
    
    img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    lower = np.array([h_min, s_min, v_min])
    upper = np.array([h_max, s_max, v_max])
    
    mask = cv2.inRange(img_hsv, lower, upper)
    img_masked = cv2.bitwise_and(img, img, mask=mask)

    cv2.imshow("Image", img_masked)
    if cv2.waitKey(1) & 0xFF == ord("q"): # If you press the q key
        break

With the values I chose, I got:

[masked result images]

  12. Loop through the blue shape contours and the green shape contours in parallel; depending on which color shape you want the origin to be centered on, pass that color's contour into the get_centeroid function we defined earlier:
for cnt_blue, cnt_green in zip(img_blue_contours, img_green_contours[::-1]):
    center = get_centeroid(cnt_blue)
    img, angles = get_distances(img, cnt_green.squeeze(), cnt_blue.squeeze(), center, 30)
    print(angles)

Notice that I used 30 as the step; that number can be changed to 4. I used 30 so the visualization would be clearer.
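
For the 4-degree spacing asked about in the question, the call inside the loop above would simply use a different step (same variables, my restatement):

img, angles = get_distances(img, cnt_green.squeeze(), cnt_blue.squeeze(), center, 4)  # 90 samples, one every 4 degrees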

  13. Finally, we can display the image:
cv2.imshow("Image", img)
cv2.waitKey(0)

Altogether:

import cv2
import numpy as np

def get_masked(img, lower, upper):
    img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(img_hsv, np.array(lower), np.array(upper))
    img_mask = cv2.bitwise_and(img, img, mask=mask)
    return img_mask

def get_processed(img):
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    img_blur = cv2.GaussianBlur(img_gray, (7, 7), 7)
    img_canny = cv2.Canny(img_blur, 50, 50)
    kernel = np.ones((7, 7))
    img_dilate = cv2.dilate(img_canny, kernel, iterations=2)
    img_erode = cv2.erode(img_dilate, kernel, iterations=2)
    return img_erode

def get_contours(img):
    contours, hierarchy = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    return [cnt for cnt in contours if cv2.contourArea(cnt) > 500]

def get_centeroid(cnt):
    length = len(cnt)
    sum_x = np.sum(cnt[..., 0])
    sum_y = np.sum(cnt[..., 1])
    return int(sum_x / length), int(sum_y / length)

def get_pt_at_angle(pts, pt, ang):
    angles = np.rad2deg(np.arctan2(*(pt - pts).T))
    angles = np.where(angles < -90, angles + 450, angles + 90)
    found= np.rint(angles) == ang
    if np.any(found):
        return pts[found][0]
        
def get_distances(img, cnt1, cnt2, center, step):
    angles = dict()
    for angle in range(0, 360, step):
        pt1 = get_pt_at_angle(cnt1, center, angle)
        pt2 = get_pt_at_angle(cnt2, center, angle)
        if np.any(pt1) and np.any(pt2):
            d = round(np.linalg.norm(pt1 - pt2))
            cv2.putText(img, str(d), tuple(pt1), cv2.FONT_HERSHEY_PLAIN, 0.8, (0, 0, 0))
            cv2.drawContours(img, np.array([[center, pt1]]), -1, (255, 0, 255), 1)
            angles[angle] = d
            
    return img, angles

img = cv2.imread("shapes1.png")

img_green = get_masked(img, [10, 0, 0], [70, 255, 255])
img_blue = get_masked(img, [70, 0, 0], [179, 255, 255])

img_green_processed = get_processed(img_green)
img_blue_processed = get_processed(img_blue)

img_green_contours = get_contours(img_green_processed)
img_blue_contours = get_contours(img_blue_processed)

for cnt_blue, cnt_green in zip(img_blue_contours, img_green_contours[::-1]):
    center = get_centeroid(cnt_blue)
    img, angles = get_distances(img, cnt_green.squeeze(), cnt_blue.squeeze(), center, 30)
    print(angles)

cv2.imshow("Image", img)
cv2.waitKey(0)

Output:

{0: 5, 30: 4, 60: 29, 90: 25, 120: 31, 150: 8, 180: 5, 210: 7, 240: 14, 270: 12, 300: 14, 330: 21}
{0: 10, 30: 9, 60: 6, 90: 0, 120: 11, 150: 7, 180: 5, 210: 6, 240: 6, 270: 4, 300: 0, 330: 16}

[output visualization]

Note: For certain shapes, some angles might be absent in the dictionary. That is caused by the preprocessing (get_processed) function; you would get more accurate results if you turned down some of its values, such as the blur sigma.
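
For instance, a gentler preprocessing variant might look like this (hypothetical values, my addition; tune them against your own images):

def get_processed_fine(img):
    # A smaller blur kernel/sigma and a single dilate/erode pass keep the detected
    # edges closer to the true contour, at the cost of letting more noise through
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    img_blur = cv2.GaussianBlur(img_gray, (3, 3), 2)
    img_canny = cv2.Canny(img_blur, 50, 50)
    kernel = np.ones((3, 3))
    img_dilate = cv2.dilate(img_canny, kernel, iterations=1)
    img_erode = cv2.erode(img_dilate, kernel, iterations=1)
    return img_erode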

I borrowed the general idea of using Shapely, and the basic code, from tfv's answer. Nevertheless, iterating the desired angles, calculating the needed end points for the correct lines to be intersected with the shapes, and calculating and storing the distances were missing, so I added all of that.

That'd be my full code:

import cv2
import numpy as np
import shapely.geometry as shapgeo

# Read image, and binarize
img = cv2.imread('G48xu.jpg', cv2.IMREAD_GRAYSCALE)
img = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)[1]

# Find (approximated) contours of inner and outer shape
cnts, hier = cv2.findContours(img.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
outer = [cv2.approxPolyDP(cnts[0], 0.1, True)]
inner = [cv2.approxPolyDP(cnts[2], 0.1, True)]

# Just for visualization purposes: Draw contours of inner and outer shape
h, w = img.shape[:2]
vis = np.zeros((h, w, 3), np.uint8)
cv2.drawContours(vis, outer, -1, (255, 0, 0), 1)
cv2.drawContours(vis, inner, -1, (0, 0, 255), 1)

# Squeeze contours for further processing
outer = np.vstack(outer).squeeze()
inner = np.vstack(inner).squeeze()

# Calculate centroid of inner contour
M = cv2.moments(inner)
cx = int(M['m10'] / M['m00'])
cy = int(M['m01'] / M['m00'])

# Calculate maximum needed radius for later line intersections
r_max = np.min([cx, w - cx, cy, h - cy])

# Set up angles (in degrees)
angles = np.arange(0, 360, 4)

# Initialize distances
dists = np.zeros_like(angles)

# Prepare calculating the intersections using Shapely
poly_outer = shapgeo.asLineString(outer)
poly_inner = shapgeo.asLineString(inner)

# Iterate angles and calculate distances between inner and outer shape
for i, angle in enumerate(angles):

    # Convert angle from degrees to radians
    angle = angle / 180 * np.pi

    # Calculate end points of line from centroid in angle's direction
    x = np.cos(angle) * r_max + cx
    y = np.sin(angle) * r_max + cy
    points = [(cx, cy), (x, y)]

    # Calculate intersections using Shapely
    poly_line = shapgeo.LineString(points)
    insec_outer = np.array(poly_outer.intersection(poly_line))
    insec_inner = np.array(poly_inner.intersection(poly_line))

    # Calculate distance between intersections using L2 norm
    dists[i] = np.linalg.norm(insec_outer - insec_inner)

    # Just for visualization purposes: Draw lines for some examples
    if (i == 10) or (i == 40) or (i == 75):

        # Line from centroid to end points
        cv2.line(vis, (cx, cy), (int(x), int(y)), (128, 128, 128), 1)

        # Line between both shapes
        cv2.line(vis,
                 (int(insec_inner[0]), int(insec_inner[1])),
                 (int(insec_outer[0]), int(insec_outer[1])), (0, 255, 0), 2)

        # Distance
        cv2.putText(vis, str(dists[i]), (int(x), int(y)),
                    cv2.FONT_HERSHEY_COMPLEX, 0.75, (0, 255, 0), 2)

# Output angles and distances
print(np.vstack([angles, dists]).T)

# Just for visualization purposes: Output image
cv2.imshow('Output', vis)
cv2.waitKey(0)
cv2.destroyAllWindows()

I generated some exemplary output for visualization purposes:

[output image]

And here's an excerpt from the output, showing each angle and the corresponding distance:

[[  0  70]
 [  4  71]
 [  8  73]
 [ 12  76]
 [ 16  77]
 ...
 [340  56]
 [344  59]
 [348  62]
 [352  65]
 [356  67]]
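
If you need the angle/distance table downstream (my addition, not part of the original answer), the same two arrays can be written to disk, for example as CSV:

np.savetxt('distances.csv', np.vstack([angles, dists]).T,
           fmt='%d', delimiter=',', header='angle,distance')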

Hopefully, the code is self-explanatory. If not, please don't hesitate to ask questions. I'll gladly provide further information.

----------------------------------------
System information
----------------------------------------
Platform:      Windows-10-10.0.16299-SP0
Python:        3.9.1
NumPy:         1.20.2
OpenCV:        4.5.1
Shapely:       1.7.1
----------------------------------------
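
One compatibility note (my addition): the listing above was written against Shapely 1.7, where shapely.geometry.asLineString still exists; it was removed in Shapely 2.0. Under Shapely 2.x the relevant lines could be adapted roughly as follows, assuming each ray crosses each contour in exactly one point:

# Shapely >= 2.0 sketch: construct LineStrings directly and work with geometry objects
poly_outer = shapgeo.LineString(outer)    # instead of shapgeo.asLineString(outer)
poly_inner = shapgeo.LineString(inner)

insec_outer = poly_outer.intersection(poly_line)   # a single shapely Point per crossing
insec_inner = poly_inner.intersection(poly_line)
dists[i] = insec_outer.distance(insec_inner)       # Euclidean distance between the two points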
