
Matching Features with ORB python opencv

Hi, I am matching features with ORB in Python OpenCV, but when I run this code I get this error:

Traceback (most recent call last):
  File "ffl.py", line 27, in
    for m,n in matches:
TypeError: 'cv2.DMatch' object is not iterable

I don't know how to fix it.

import numpy as np
import cv2
import time

ESC=27   
camera = cv2.VideoCapture(0)
orb = cv2.ORB_create()
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

imgTrainColor = cv2.imread('/home/shar/home.jpg')
imgTrainGray = cv2.cvtColor(imgTrainColor, cv2.COLOR_BGR2GRAY)

kpTrain = orb.detect(imgTrainGray,None)
kpTrain, desTrain = orb.compute(imgTrainGray, kpTrain)

firsttime = True

while True:

    ret, imgCamColor = camera.read()
    imgCamGray = cv2.cvtColor(imgCamColor, cv2.COLOR_BGR2GRAY)
    kpCam = orb.detect(imgCamGray,None)
    kpCam, desCam = orb.compute(imgCamGray, kpCam)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = bf.match(desCam,desTrain)
    good = []
    for m,n in matches:
        if m.distance < 0.7*n.distance:
            good.append(m)

    if firsttime==True:
        h1, w1 = imgCamColor.shape[:2]
        h2, w2 = imgTrainColor.shape[:2]
        nWidth = w1+w2
        nHeight = max(h1, h2)
        hdif = (h1-h2)/2
        firsttime=False

    result = np.zeros((nHeight, nWidth, 3), np.uint8)
    result[hdif:hdif+h2, :w2] = imgTrainColor
    result[:h1, w2:w1+w2] = imgCamColor

    for i in range(len(matches)):
        pt_a=(int(kpTrain[matches[i].trainIdx].pt[0]), int(kpTrain[matches[i].trainIdx].pt[1]+hdif))
        pt_b=(int(kpCam[matches[i].queryIdx].pt[0]+w2), int(kpCam[matches[i].queryIdx].pt[1]))
        cv2.line(result, pt_a, pt_b, (255, 0, 0))

    cv2.imshow('Camara', result)

    key = cv2.waitKey(20)                                 
    if key == ESC:
        break

cv2.destroyAllWindows()
camera.release()

bf.match returns only a list of single DMatch objects; you cannot iterate over it with m,n. Maybe you are confusing it with bf.knnMatch?

You can just change your code to:

for m in matches:
    if m.distance < 0.7:
        good.append(m)
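
Since bf.match returns a flat list of single DMatch objects, a common pattern (used in the OpenCV feature-matching tutorial) is to sort them by distance and draw the best ones. A minimal sketch reusing the variables from the question (the count of 20 is just an example value):

matches = bf.match(desCam, desTrain)
# each element is a single cv2.DMatch; sort by ascending distance (lower is better)
matches = sorted(matches, key=lambda m: m.distance)
# draw the 20 best matches side by side for a quick visual check
vis = cv2.drawMatches(imgCamColor, kpCam, imgTrainColor, kpTrain, matches[:20], None, flags=2)
cv2.imshow('matches', vis)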

From the Python tutorials of OpenCV (link):

The result of the matches = bf.match(des1,des2) line is a list of DMatch objects. This DMatch object has the following attributes:

  • DMatch.distance - Distance between descriptors. The lower, the better it is.
  • DMatch.trainIdx - Index of the descriptor in train descriptors
  • DMatch.queryIdx - Index of the descriptor in query descriptors
  • DMatch.imgIdx - Index of the train image.
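
For example, with the question's call matches = bf.match(desCam, desTrain), desCam is the query set and desTrain is the train set, so a small sketch of mapping one match back to pixel coordinates looks like this:

m = matches[0]
pt_cam = kpCam[m.queryIdx].pt      # (x, y) of the keypoint in the camera frame (query image)
pt_train = kpTrain[m.trainIdx].pt  # (x, y) of the keypoint in the reference image (train image)
print(m.distance, pt_cam, pt_train)
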
for m in matches:
    if m.distance < 0.7:
        good.append(m)

This block of code (the fix suggested above) is good, but it does not really have the same meaning as the original. I think that using ORB with something involving the n and n+1 elements of the matches refers to the original intent of the SIFT algorithm, which performs a ratio test.

So, a correct version of the code would be (I guess):

for i, m in enumerate(matches):
    if i < len(matches) - 1 and m.distance < 0.7 * matches[i+1].distance:
        good.append(m)

This is less efficient, and there may be a workaround or better code. But my main point is to underline that the code in that answer does not do the same thing as the OP's code.

The original SIFT paper says:

This test rejects poor matches by computing the ratio between the best and second-best match distances. If the ratio is above some threshold, the match is discarded as being low-quality.

Please also note that "0.7" is named "ratio" and is fixed to 0.75 (from memory) in the original paper.

The code attempts to use Lowe's ratio test (see the original SIFT paper).

This requires, for every descriptor, the two closest matches.

The code should read:

    matches = bf.knnMatch(desCam, desTrain, k=2) # knnMatch is crucial
    good = []
    for (m1, m2) in matches: # for every descriptor, take closest two matches
        if m1.distance < 0.7 * m2.distance: # best match has to be this much closer than second best
            good.append(m1)
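
Note that the OP's matcher is created with crossCheck=True; cross-checking is documented as an alternative to the ratio test, and with it knnMatch may return fewer than two neighbours per query descriptor, which breaks the (m1, m2) unpacking. A safer sketch (an assumption about how to guard this, not part of the original answer) is to build the matcher with crossCheck=False, the default, and skip incomplete pairs:

    bf = cv2.BFMatcher(cv2.NORM_HAMMING)              # crossCheck=False (default), so two neighbours are available
    matches = bf.knnMatch(desCam, desTrain, k=2)
    good = []
    for pair in matches:
        if len(pair) < 2:                             # descriptor had fewer than two neighbours; skip it
            continue
        m1, m2 = pair
        if m1.distance < 0.7 * m2.distance:           # Lowe's ratio test
            good.append(m1)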

Further, I would highly recommend the FLANN matcher. It is faster than the brute-force matcher.
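
A sketch of using FLANN with ORB's binary descriptors; the LSH index parameters below are the values suggested in the OpenCV feature-matching tutorial, offered here as a starting point rather than tuned settings:

    FLANN_INDEX_LSH = 6
    index_params = dict(algorithm=FLANN_INDEX_LSH,
                        table_number=6,       # tutorial suggests 6-12
                        key_size=12,          # tutorial suggests 12-20
                        multi_probe_level=1)  # tutorial suggests 1-2
    search_params = dict(checks=50)           # more checks = more accurate, but slower
    flann = cv2.FlannBasedMatcher(index_params, search_params)
    matches = flann.knnMatch(desCam, desTrain, k=2)   # then apply the same ratio test as above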

Look at the OpenCV tutorials or the samples directory in OpenCV's source (samples/python/find_obj.py) for code that works.
