
OpenCV Image Alignment using ORB

I need to precisely align two images. To do that I am using the Enhanced Correlation Coefficient (ECC) method, which gives me great results except for images that are rotated a lot. For example, if the reference image (base image) and the tested image (the one I want to align) are rotated by 90 degrees, the ECC method doesn't work. That is expected according to the documentation of findTransformECC(), which says:

Note that if images undergo strong displacements/rotations, an initial transformation that roughly aligns the images is necessary (e.g., a simple euclidean/similarity transform that allows for the images showing the same image content approximately).
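For reference, a minimal sketch of how I call ECC (variable names are illustrative, and some OpenCV 4.x builds additionally require inputMask and gaussFiltSize arguments). The point is that the warp_matrix passed in acts as the initial guess, so a rough pre-alignment can be supplied there:

    import cv2
    import numpy as np

    # warp_matrix is both the initial guess and the output of ECC.
    # A rough pre-alignment (e.g. from feature matching) can be written into it
    # before the call; the identity means "no initial alignment".
    warp_matrix = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 5000, 1e-6)
    cc, warp_matrix = cv2.findTransformECC(base_gray, test_gray, warp_matrix,
                                           cv2.MOTION_EUCLIDEAN, criteria)
    # Apply the estimated warp to the test image (inverse map, as in the OpenCV example).
    aligned = cv2.warpAffine(test_gray, warp_matrix,
                             (base_gray.shape[1], base_gray.shape[0]),
                             flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)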

So I have to use a feature-point based alignment method to do some rough alignment first. I tried both SIFT and ORB and I am facing the same problem with both. It works fine for some images, while for others the resulting transformation is shifted or rotated to the wrong side.
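To illustrate what I mean by a rough alignment, here is a sketch of turning matched keypoints into an initial euclidean/similarity estimate that could seed ECC. estimateAffinePartial2D is just one option, and the point arrays are assumed to be built like the ones in the code further down:

    import cv2
    import numpy as np

    # base_pts / test_pts: Nx1x2 float32 arrays of matched keypoint coordinates,
    # built exactly like base_keypoints / test_keypoints in the code below.
    # estimateAffinePartial2D restricts the model to rotation + translation + uniform
    # scale, i.e. the "simple euclidean/similarity transform" the documentation mentions.
    warp_init, inliers = cv2.estimateAffinePartial2D(base_pts, test_pts, method=cv2.RANSAC)

    # warp_init maps base coordinates to test coordinates, which matches the convention
    # findTransformECC uses when base_gray is the template image, so (as float32) it can
    # be used as the initial warp_matrix for ECC refinement.
    warp_matrix = warp_init.astype(np.float32)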

These are the input images: [reference image] [image to be aligned]

I thought that the problem was caused by wrong matches, but if I use just the 10 keypoints with the smallest distance, it seems to me that all of them are good matches (I get exactly the same result when I use 100 keypoints).

This is the result of matching: [matches image]

This is the result: [result image]

If you compare it with the rotated image, it is shifted to the right and upside down. What am I missing?

This is my code:

    # Initiate ORB detector
    orb = cv2.ORB_create()

    # find the keypoints with ORB
    kp_base = orb.detect(base_gray, None)
    kp_test = orb.detect(test_gray, None)

    # compute the descriptors with ORB
    kp_base, des_base = orb.compute(base_gray, kp_base)
    kp_test, des_test = orb.compute(test_gray, kp_test)

    # Debug print
    base_keypoints = cv2.drawKeypoints(base_gray, kp_base, color=(0, 0, 255), flags=0, outImage=base_gray)
    test_keypoints = cv2.drawKeypoints(test_gray, kp_test, color=(0, 0, 255), flags=0, outImage=test_gray)

    output.debug_show("Base image keypoints",base_keypoints, debug_mode=debug_mode,fxy=fxy,waitkey=True)
    output.debug_show("Test image keypoints",test_keypoints, debug_mode=debug_mode,fxy=fxy,waitkey=True)

    # find matches
    # create BFMatcher object
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    # Match descriptors.
    matches = bf.match(des_base, des_test)
    # Sort them in the order of their distance.
    matches = sorted(matches, key=lambda x: x.distance)


    # Debug print - Draw first 10 matches.
    number_of_matches = 10
    matches_img = cv2.drawMatches(base_gray, kp_base, test_gray, kp_test, matches[:number_of_matches], flags=2, outImg=base_gray)
    output.debug_show("Matches", matches_img, debug_mode=debug_mode,fxy=fxy,waitkey=True)

    # calculate transformation matrix
    base_keypoints = np.float32([kp_base[m.queryIdx].pt for m in matches[:number_of_matches]]).reshape(-1, 1, 2)
    test_keypoints = np.float32([kp_test[m.trainIdx].pt for m in matches[:number_of_matches]]).reshape(-1, 1, 2)
    # Calculate Homography
    h, status = cv2.findHomography(base_keypoints, test_keypoints)
    # Warp source image to destination based on homography
    im_out = cv2.warpPerspective(test_gray, h, (base_gray.shape[1], base_gray.shape[0]))
    output.debug_show("After rotation", im_out, debug_mode=debug_mode, fxy=fxy)

The answer to this problem is both mundane and irritating. Assuming this is the same issue as the one I've encountered (I think it is):

Problem and Explanation

Images are saved by most cameras with EXIF tags that include an "Orientation" value. Beginning with OpenCV 3.2, this orientation tag is automatically read in when an image is loaded with cv.imread(), and the image is oriented based on the tag (there are 8 possible orientations, which include 90° rotations, mirroring and flipping). Some image viewing applications (such as Image Viewer in Linux Mint Cinnamon, and Adobe Photoshop) will display images rotated in the direction of the EXIF Orientation tag. Other applications (such as QGIS and OpenCV < 3.2) ignore the tag. If your Image 1 has an orientation tag, and Image 2 has an orientation tag, and you perform the alignment with ORB (I haven't tried SIFT for this) in OpenCV, your aligned Image 2 will appear with the correct orientation (that of Image 1) when opened in an application that reads the EXIF Orientation tag. However, if you open both images in an application that ignores the EXIF Orientation tag, then they will not appear to have the same orientation. This problem becomes even more pronounced when one image has an orientation tag and the other does not.
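If you want to confirm this is the cause, check whether your files actually carry an Orientation tag, e.g. with Pillow. This is a small sketch outside the OpenCV workflow; tag ID 274 is the standard EXIF Orientation tag, and getexif() needs a reasonably recent Pillow:

    from PIL import Image

    ORIENTATION_TAG = 274  # standard EXIF tag ID for "Orientation"

    for path in ("image1.jpg", "image2.jpg"):   # replace with your own file names
        exif = Image.open(path).getexif()
        # value 1 means "normal"; values 2-8 encode the mirrored/rotated variants
        print(path, "Orientation =", exif.get(ORIENTATION_TAG, "not present"))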

One Possible Solution

Remove the EXIF Orientation tags prior to reading the images into OpenCV. As of OpenCV 3.4 (maybe 3.3?) there is an option to load the images ignoring the tag, but when this is done they are loaded as grayscale (1 channel), which is not helpful if you NEED color: cv.imread('image.jpg', 128), where 128 means "ignore orientation". So, I use pyexiv2 in Python to remove the offending EXIF Orientation tag from my images:

    import pyexiv2

    image = path_to_image  # path to the image file to fix
    imageMetadata = pyexiv2.ImageMetadata(image)
    imageMetadata.read()
    try:
        del imageMetadata['Exif.Image.Orientation']
        imageMetadata.write()
    except KeyError:
        pass  # no Orientation tag present ('continue' only makes sense when this runs inside a loop)
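As a side note, depending on your OpenCV version you may also be able to keep color while ignoring the tag by combining imread flags; cv2.IMREAD_IGNORE_ORIENTATION is the named constant behind the bare 128 above (a sketch, not verified against every version):

    import cv2

    # IMREAD_COLOR forces a 3-channel load, IMREAD_IGNORE_ORIENTATION (== 128)
    # skips the EXIF-based auto-rotation, so both images come in "as stored".
    img = cv2.imread('image.jpg', cv2.IMREAD_COLOR | cv2.IMREAD_IGNORE_ORIENTATION)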
