
'module' object has no attribute 'drawMatches' opencv python

I am just doing an example of feature detection in OpenCV. The example is shown below. It is giving me the following error:

'module' object has no attribute 'drawMatches'

I have checked the OpenCV docs and am not sure why I'm getting this error. Does anyone know why?

import numpy as np
import cv2
import matplotlib.pyplot as plt

img1 = cv2.imread('box.png',0)          # queryImage
img2 = cv2.imread('box_in_scene.png',0) # trainImage

# Initiate ORB detector
orb = cv2.ORB()

# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)

# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Match descriptors.
matches = bf.match(des1,des2)

# Draw first 10 matches.
img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10], flags=2)

plt.imshow(img3),plt.show()

Error:

Traceback (most recent call last):
File "match.py", line 22, in <module>
img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10], flags=2)
AttributeError: 'module' object has no attribute 'drawMatches'

I am late to the party as well, but I installed OpenCV 2.4.9 for Mac OS X, and the drawMatches function doesn't exist in my distribution. I've also tried the second approach with find_obj, and that didn't work for me either. With that, I decided to write my own implementation that mimics drawMatches to the best of my ability, and this is what I've produced.

I've provided my own images where one is of a camera man, and the other one is the same image but rotated by 55 degrees counterclockwise.

The basics of what I wrote is that I allocate an output RGB image where the number of rows is the maximum of the two images, to accommodate placing both images in the output image, and the number of columns is simply the sum of the columns of the two images. Be advised that I assume that both images are grayscale.

I place each image in its corresponding spot, then run through a loop over all of the matched keypoints. I extract which keypoints matched between the two images, then extract their (x,y) coordinates. I draw circles at each of the detected locations, then draw a line connecting these circles together.

Bear in mind that the detected keypoint in the second image is with respect to its own coordinate system. If you want to place it in the final output image, you need to offset the column coordinate by the number of columns from the first image, so that the column coordinate is with respect to the coordinate system of the output image.

Without further ado:

import numpy as np
import cv2

def drawMatches(img1, kp1, img2, kp2, matches):
    """
    My own implementation of cv2.drawMatches as OpenCV 2.4.9
    does not have this function available but it's supported in
    OpenCV 3.0.0

    This function takes in two images with their associated 
    keypoints, as well as a list of DMatch data structure (matches) 
    that contains which keypoints matched in which images.

    An image will be produced where a montage is shown with
    the first image followed by the second image beside it.

    Keypoints are delineated with circles, while lines are connected
    between matching keypoints.

    img1,img2 - Grayscale images
    kp1,kp2 - Detected list of keypoints through any of the OpenCV keypoint 
              detection algorithms
    matches - A list of matches of corresponding keypoints through any
              OpenCV keypoint matching algorithm
    """

    # Create a new output image that concatenates the two images together
    # (a.k.a) a montage
    rows1 = img1.shape[0]
    cols1 = img1.shape[1]
    rows2 = img2.shape[0]
    cols2 = img2.shape[1]

    # Create the output image
    # The rows of the output are the largest between the two images
    # and the columns are simply the sum of the two together
    # The intent is to make this a colour image, so make this 3 channels
    out = np.zeros((max([rows1,rows2]),cols1+cols2,3), dtype='uint8')

    # Place the first image to the left
    out[:rows1,:cols1] = np.dstack([img1, img1, img1])

    # Place the next image to the right of it
    out[:rows2,cols1:] = np.dstack([img2, img2, img2])

    # For each pair of points we have between both images
    # draw circles, then connect a line between them
    for mat in matches:

        # Get the matching keypoints for each of the images
        img1_idx = mat.queryIdx
        img2_idx = mat.trainIdx

        # x - columns
        # y - rows
        (x1,y1) = kp1[img1_idx].pt
        (x2,y2) = kp2[img2_idx].pt

        # Draw a small circle at both co-ordinates
        # radius 4
        # colour blue
        # thickness = 1
        cv2.circle(out, (int(x1),int(y1)), 4, (255, 0, 0), 1)   
        cv2.circle(out, (int(x2)+cols1,int(y2)), 4, (255, 0, 0), 1)

        # Draw a line in between the two points
        # thickness = 1
        # colour blue
        cv2.line(out, (int(x1),int(y1)), (int(x2)+cols1,int(y2)), (255,0,0), 1)


    # Show the image
    cv2.imshow('Matched Features', out)
    cv2.waitKey(0)
    cv2.destroyWindow('Matched Features')

    # Also return the image if you'd like a copy
    return out

To illustrate that this works, here are the two images that I used:

(Image: the camera man test image)

(Image: the camera man image rotated 55 degrees counterclockwise)

I used OpenCV's ORB detector to detect the keypoints, and used the normalized Hamming distance as the distance measure for similarity, as this is a binary descriptor. As such:

import numpy as np
import cv2

img1 = cv2.imread('cameraman.png', 0) # Original image - ensure grayscale
img2 = cv2.imread('cameraman_rot55.png', 0) # Rotated image - ensure grayscale

# Create ORB detector with 1000 keypoints with a scaling pyramid factor
# of 1.2
orb = cv2.ORB(1000, 1.2)

# Detect keypoints of original image
(kp1,des1) = orb.detectAndCompute(img1, None)

# Detect keypoints of rotated image
(kp2,des2) = orb.detectAndCompute(img2, None)

# Create matcher
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Do matching
matches = bf.match(des1,des2)

# Sort the matches based on distance.  Least distance
# is better
matches = sorted(matches, key=lambda val: val.distance)

# Show only the top 10 matches - also save a copy for use later
out = drawMatches(img1, kp1, img2, kp2, matches[:10])

This is the image I get:

(Image: the matched features between the two images)


To use with knnMatch from cv2.BFMatcher

I'd like to make a note that the above code only works if you assume that the matches appear in a 1D list. However, if you decide to use the knnMatch method from cv2.BFMatcher, for example, what is returned is a list of lists. Specifically, given the descriptors in img1 called des1 and the descriptors in img2 called des2, each element in the list returned from knnMatch is another list of k matches from des2 which are the closest to the corresponding descriptor in des1. Therefore, the first element of the output of knnMatch is a list of the k matches from des2 that were closest to the first descriptor in des1, the second element is a list of the k matches from des2 that were closest to the second descriptor in des1, and so on.
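As a quick illustration of that structure, here is a minimal sketch (an inspection aid only, assuming des1 and des2 are the ORB descriptors computed earlier) of what knnMatch hands back:

import cv2

# Hypothetical inspection of knnMatch's return value: a plain Hamming
# BFMatcher, no crossCheck, k=2 neighbours per descriptor in des1
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
pairs = bf.knnMatch(des1, des2, k=2)   # one inner list per descriptor in des1

best, second = pairs[0]                # two cv2.DMatch objects for the first descriptor of des1
print(best.queryIdx, best.trainIdx, best.distance)        # closest match found in des2
print(second.queryIdx, second.trainIdx, second.distance)  # second closest match in des2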

To make the most sense of knnMatch, you must limit the total number of neighbours to match to k=2. The reason is that you want at least two matched points for each available source point in order to verify the quality of the match, and if the quality is good enough, you'll want to use these to draw your matches and show them on the screen. You can use a very simple ratio test (credit goes to David Lowe) to ensure that, for a point, the distance / dissimilarity to the best matching point is much smaller than the distance / dissimilarity to the second best matching point. We can capture this by computing the ratio of the distance of the best matched point to that of the second best matched point. The ratio should be small to illustrate that a point and its best matched point are unambiguous. If the ratio is close to 1, this means that both matches are equally as "good" and thus ambiguous, so we should not include them. We can think of this as an outlier rejection technique. Therefore, to turn what is returned from knnMatch into what is required by the code I wrote above, iterate through the matches, apply the above ratio test and check whether it passes. If it does, add the first matched keypoint to a new list.

Assuming that you created all of the variables like you did before declaring the BFMatcher instance, you'd now do this to adapt the knnMatch method for using drawMatches:

# Create matcher - crossCheck is left off here because many OpenCV builds
# reject crossCheck combined with knnMatch(k=2), and the ratio test below
# performs a similar filtering role
bf = cv2.BFMatcher(cv2.NORM_HAMMING)

# Perform KNN matching
matches = bf.knnMatch(des1, des2, k=2)

# Apply ratio test
good = []
for m,n in matches:
    if m.distance / n.distance < 0.75: # Or you can do m.distance < 0.75 * n.distance
       # Add the best match m to the good list
       good.append(m)

# Or do a list comprehension
#good = [m for (m,n) in matches if m.distance < 0.75*n.distance]

# Now perform drawMatches
out = drawMatches(img1, kp1, img2, kp2, good)

As you iterate over the matches list, m and n are, for each point from des1, its best match (m) and its second best match (n), both from des2. If we see that the ratio is small, we add this best match between the two points (m) to a final list. The ratio that I use, 0.75, is a parameter that needs tuning, so if you're not getting good results, play around with it until you do. However, values between 0.7 and 0.8 are a good start.
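If you want a feel for how sensitive the result is to that threshold before settling on a value, a small hypothetical sketch like the one below (assuming matches is the list returned by bf.knnMatch(des1, des2, k=2) as in the snippet above) simply counts how many matches survive at a few candidate ratios:

# Tuning aid only: count surviving matches for a few candidate ratio thresholds
for ratio in (0.7, 0.75, 0.8):
    kept = [m for (m, n) in matches if m.distance < ratio * n.distance]
    print('ratio %.2f keeps %d matches' % (ratio, len(kept)))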

I want to attribute the above modifications to user @ryanmeasel; the answer in which these modifications were found is in his post: OpenCV Python : No drawMatchesknn function.

The drawMatches function is not part of the Python interface.
As you can see in the docs, it is only defined for C++ at the moment.

Excerpt from the docs:

 C++: void drawMatches(const Mat& img1, const vector<KeyPoint>& keypoints1, const Mat& img2, const vector<KeyPoint>& keypoints2, const vector<DMatch>& matches1to2, Mat& outImg, const Scalar& matchColor=Scalar::all(-1), const Scalar& singlePointColor=Scalar::all(-1), const vector<char>& matchesMask=vector<char>(), int flags=DrawMatchesFlags::DEFAULT )
 C++: void drawMatches(const Mat& img1, const vector<KeyPoint>& keypoints1, const Mat& img2, const vector<KeyPoint>& keypoints2, const vector<vector<DMatch>>& matches1to2, Mat& outImg, const Scalar& matchColor=Scalar::all(-1), const Scalar& singlePointColor=Scalar::all(-1), const vector<vector<char>>& matchesMask=vector<vector<char> >(), int flags=DrawMatchesFlags::DEFAULT )

If the function had a Python interface, you would find something like this:

 Python: cv2.drawMatches(img1, keypoints1, [...]) 

EDIT

There actually was a commit that introduced this function 5 months ago. However, it is not (yet) in the official documentation.
Make sure you are using the newest OpenCV version (2.4.7). For the sake of completeness, the function's interface for OpenCV 3.0.0 will look like this:

cv2.drawMatches(img1, keypoints1, img2, keypoints2, matches1to2[, outImg[, matchColor[, singlePointColor[, matchesMask[, flags]]]]]) → outImg
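As a quick sanity check, a minimal sketch along the lines below (reusing the variables from the question's snippet, and only meaningful on a build that actually ships the binding, such as OpenCV 3.0.0) first confirms which version Python is loading and then makes the call; in practice many builds want the outImg slot filled explicitly, so passing None is the safe choice:

import cv2

print(cv2.__version__)               # which OpenCV is Python actually loading?
print(hasattr(cv2, 'drawMatches'))   # False on builds without the binding

# Assumes img1, kp1, img2, kp2 and matches exist as in the question's code;
# None fills the outImg slot and flags=2 skips drawing unmatched keypoints
img3 = cv2.drawMatches(img1, kp1, img2, kp2, matches[:10], None, flags=2)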

I know this question has an accepted answer that is correct, but if you are using OpenCV 2.4.8 and not 3.0(-dev), a workaround could be to use some functions from the included samples found in opencv\sources\samples\python2\find_obj:

import cv2
from find_obj import filter_matches,explore_match

img1 = cv2.imread('../c/box.png',0)          # queryImage
img2 = cv2.imread('../c/box_in_scene.png',0) # trainImage

# Initiate ORB detector
orb = cv2.ORB()

# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)

# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING)#, crossCheck=True)

matches = bf.knnMatch(des1, trainDescriptors = des2, k = 2)
p1, p2, kp_pairs = filter_matches(kp1, kp2, matches)
explore_match('find_obj', img1,img2,kp_pairs)#cv2 shows image

cv2.waitKey()
cv2.destroyAllWindows()

This is the output image:

(Image: the matches drawn by explore_match)
