
How to detect rectangular items in an image with Python

I have found a plethora of questions about finding "things" in images with OpenCV and similar libraries in Python, but so far I have been unable to piece them together into a reliable solution to my problem.

I am attempting to use computer vision to help count tiny surface-mount electronic parts. The idea is for me to dump parts onto a solid-color piece of paper, snap a picture, and have the software tell me how many items are in it.

The "things" differ from one picture to the next but will always be identical in any one image. “事物”从一张图片到另一张图片不同,但在任何一张图片中总是相同的。 I seem to be able to manually tune the parameters for things like hue/saturation for a particular part but it tends to require tweaking every time I change to a new part. 我似乎能够手动调整特定部件的色调/饱和度等参数,但每次更换新部件时都需要调整。

My current, semi-functioning code is posted below:

import imutils
import numpy
import cv2
import sys

def part_area(contours, round_to=10):
    """Finds the mode of the contour areas.  The idea is that most of the parts in an image will be separated, so
    finding the most common area in the list of areas should provide a reasonable value to approximate by.  The areas
    are rounded to the nearest multiple of round_to to reduce the list of options."""
    # Start with a list of all of the areas for the provided contours.
    areas = [cv2.contourArea(contour) for contour in contours]
    # Determine a threshold for the minimum amount of area as 1% of the overall range.
    threshold = (max(areas) - min(areas)) / 100
    # Trim the list of areas down to only those that exceed the threshold.
    thresholded = [area for area in areas if area > threshold]
    # Round the areas to the nearest multiple set by the round_to argument.
    rounded = [int((area + (round_to / 2)) / round_to) * round_to for area in thresholded]
    # Remove any areas that rounded down to zero.
    cleaned = [area for area in rounded if area != 0]
    # Count the areas with the same values.
    counts = {}
    for area in cleaned:
        if area not in counts:
            counts[area] = 0
        counts[area] += 1
    # Reduce the areas down to only those that are in groups of three or more with the same area.
    above = []
    for area, count in counts.items():
        if count > 2:
            for _ in range(count):
                above.append(area)
    # Take the mean of the areas as the average part size; fall back to the full cleaned list if no group qualified.
    if not above:
        above = cleaned
    average = sum(above) / len(above)
    return average

def find_hue_mode(hsv):
    """Given an HSV image as an input, compute the mode of the hue channel to find the most common hue in the
    image.  This is used to determine the center for the background color filter."""
    # Count occurrences of each hue value (OpenCV hues are 0-179); far faster than a Python loop over pixels.
    counts = numpy.bincount(hsv[:, :, 0].ravel(), minlength=180)
    return int(counts.argmax())


if __name__ == "__main__":
    # load the image
    image = cv2.imread(sys.argv[1])

    # Convert to HSV and find the most common hue, which is assumed to be the background color.
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    center = find_hue_mode(hsv)
    print('Center Hue:', center)

    # Build a band around the background hue.  Note that OpenCV hues wrap around at 180, which this ignores.
    lower = numpy.array([max(center - 10, 0), 50, 50])
    upper = numpy.array([min(center + 10, 179), 255, 255])
    # Threshold the HSV image to keep only the background color, then invert so the parts are white.
    mask = cv2.inRange(hsv, lower, upper)
    inverted = cv2.bitwise_not(mask)

    blurred = cv2.GaussianBlur(inverted, (5, 5), 0)
    edged = cv2.Canny(blurred, 50, 100)
    dilated = cv2.dilate(edged, None, iterations=1)
    eroded = cv2.erode(dilated, None, iterations=1)

    # find contours in the thresholded image
    contours = cv2.findContours(eroded.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = imutils.grab_contours(contours)  # normalizes the differing return values across OpenCV 2/3/4

    # Compute the area for a single part to use when setting the threshold and calculating the number of parts within
    # a contour area.  (Use a new name rather than rebinding the part_area function itself.)
    single_part_area = part_area(contours)
    # The threshold for a part's area - can't be too much smaller than the part itself.
    threshold = single_part_area * 0.5

    part_count = 0
    for contour in contours:
        if cv2.contourArea(contour) < threshold:
            continue

        # Sometimes parts are close enough together that they become one in the image.  To battle this, the total area
        # of the contour is divided by the area of a part (derived earlier).
        part_count += int((cv2.contourArea(contour) / single_part_area) + 0.1)  # the 0.1 "rounds up" slightly and was determined empirically

        # Draw an approximate contour around each detected part to give the user an idea of what the tool has computed.
        epsilon = 0.1 * cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, epsilon, True)
        cv2.drawContours(image, [approx], -1, (0, 255, 0), 2)

    # Print the part count and show off the processed image.
    print 'Part Count:', part_count
    cv2.imshow("Image", image)
    cv2.waitKey(0)

Here's an example of the type of input image I am using: [image: some capacitors] or this: [image: some resistors]

And I'm currently getting results like this: [image: annotated detection result]

The results clearly show that the script is having trouble identifying some parts, and its true Achilles' heel seems to be when parts touch one another.

So my question/challenge is: what can I do to improve the reliability of this script?

The script is to be integrated into an existing Python tool, so I am searching for a solution using Python. The solution does not need to be pure Python, as I am willing to install whatever third-party libraries might be needed.

If the objects are all of similar types, you might have more success isolating a single example in the image and then using feature matching to detect them.

A full solution would be out of scope for Stack Overflow, but my suggestion for making progress would be to first find one or more "correct" examples using your current rectangle-retrieval method. You could look for all the samples that are of the expected size, or that are accurate rectangles.

Once you have isolated a few positive examples, use feature-matching techniques to find the others. There is a lot of reading you will probably need to do on it, but it is a potential solution.

A general summary is that you use your positive examples to find "features" of the object you want to detect. These "features" are generally things like corners or changes in gradient. OpenCV contains many methods you can use.

Once you have the features, there are several algorithms in OpenCV you can look at that will search the image for all matching features. You'll want one that is rotation-invariant (it can detect the same features arranged at different rotations), but you probably don't need scale invariance (detecting the same features at multiple scales).

My one concern with this method is that the items you are searching for in your images are quite small. It might be difficult to find good, consistent features to match on.

You're tackling a 2D object-recognition problem, for which there are many possible approaches. You've gone about it using background/foreground segmentation, which is reasonable since you have control over the scene (laying down the background paper sheet). However, this will always have fundamental limitations when the objects touch. A simple approach to your problem could be this:

1) Assume that touching objects are rare events (which is a fair assumption for your problem). You can then compute the area of each segmented region and take the median of those areas, which gives a robust estimate of a single object's area. Call this robust estimate A (in squared pixels). This works as long as fewer than 50% of the regions correspond to touching objects.

2) Then measure the number of objects in each segmented region. Let Ai be the area of the i-th region. Compute the number of objects in each region as Ni = round(Ai / A), and sum the Ni to get the total number of objects.

This approach will work as long as the following conditions are met:
A) The touching objects do not significantly overlap.
B) You do not have objects lying on their sides. If you do, you might be able to deal with it using two area estimates (side and flat), but it is simpler to eliminate this scenario if you can.
C) The objects are all roughly the same distance from the camera. If not, the objects' areas (in pixels) cannot be modelled well by a single value.
D) There are no partially visible objects at the borders of the image.
E) Only the same type of object is visible in each image.
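Steps 1) and 2) can be sketched as follows; the region areas here are made-up numbers standing in for the per-contour areas your segmentation produces (four isolated parts, one blob of two touching parts, one blob of three):

```python
import numpy

# Hypothetical areas (in squared pixels) of segmented regions.
areas = numpy.array([310.0, 295.0, 305.0, 300.0, 610.0, 915.0])

# 1) Robust single-part area estimate A: the median resists the merged blobs.
A = numpy.median(areas)

# 2) Per-region counts Ni = round(Ai / A), summed for the total.
counts = numpy.rint(areas / A).astype(int)
total = int(counts.sum())
print('A =', A, 'counts =', counts.tolist(), 'total =', total)
```

Compared with the `part_area` helper in the posted script, the median needs no rounding or grouping heuristics, which is largely why it is the more robust estimator here.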
