
How do I add additional photos to my code in OpenCV Python for my Stitching Project?

(images: Log 1, Log 2, Log 3)

Honestly, I'm pretty lost. I was finally able to stitch these two photos together, but I'm not sure how to update my code to combine more than two photos. How would I change my code to allow stitching multiple pictures? Below is what I currently have. I should mention that the pictures I'm using are of poor quality, so the simpler examples I found elsewhere either didn't work or couldn't use all the pictures I need. If someone could give me a general direction on how to start changing this code for up to five pictures, I would greatly appreciate it.

import cv2
import numpy as np
import matplotlib.pyplot as plt
import imageio
cv2.ocl.setUseOpenCL(False)
import warnings
warnings.filterwarnings('ignore')

#SIFT is a feature detector/descriptor that locates distinctive pixel coordinates, i.e. a corner-like detector
feature_extraction_algo = 'sift'

feature_to_match = 'bf'
#train image needs to be the one transformed
train_photo = cv2.imread('Stitching/Images/Log771/Log2.bmp')

#converting from BGR to RGB for Matplotlib
train_photo = cv2.cvtColor(train_photo, cv2.COLOR_BGR2RGB)
train_photo_crop = train_photo[0:10000, 425:750]

#converting to gray scale
train_photo_gray = cv2.cvtColor(train_photo_crop, cv2.COLOR_RGB2GRAY)


#Do the same for the query image
query_photo = cv2.imread('Stitching/Images/Log771/Log3.bmp')
query_photo = cv2.cvtColor(query_photo, cv2.COLOR_BGR2RGB)
query_photo_crop = query_photo[0:10000, 425:750]
query_photo_gray = cv2.cvtColor(query_photo_crop, cv2.COLOR_RGB2GRAY)

#(both images are cropped above)



#view/plot images
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, constrained_layout=False, figsize=(16,9))
ax1.imshow(query_photo_crop, cmap="gray")
ax1.set_xlabel("Query Image", fontsize=14)

ax2.imshow(train_photo_crop, cmap="gray")
ax2.set_xlabel("Train Image", fontsize=14)
plt.savefig("./"+'.jpg', bbox_inches='tight', dpi=300, optimize=True, format='jpeg')
plt.show()

#sift.detectAndCompute() returns keypoints and descriptors. Keypoints alone can't tell how similar
#two points really are -- e.g. one picture is huge and one is small, so keypoints can coincide without
#truly matching. Descriptors encode each keypoint's neighborhood as a vector so keypoints can be compared.

def select_descriptor_methods(image, method=None):
    assert method is not None, "Please define a feature descriptor method. Accepted values are: 'sift', 'surf', 'brisk', 'orb'"
    if method == 'sift':
        descriptor = cv2.SIFT_create()
    elif method == 'surf':
        #SURF is patented: it requires an opencv-contrib build and lives in cv2.xfeatures2d
        descriptor = cv2.xfeatures2d.SURF_create()
    elif method == 'brisk':
        descriptor = cv2.BRISK_create()
    elif method == 'orb':
        descriptor = cv2.ORB_create()
    (keypoints, features) = descriptor.detectAndCompute(image, None)
    return (keypoints, features)

keypoints_train_img, features_train_img = select_descriptor_methods(train_photo_gray, method=feature_extraction_algo)

keypoints_query_img, features_query_img = select_descriptor_methods(query_photo_gray, method=feature_extraction_algo)

for keypoint in keypoints_query_img:
    x, y = keypoint.pt
    size = keypoint.size
    orientation = keypoint.angle
    response = keypoint.response
    octave = keypoint.octave
    class_id = keypoint.class_id

#the loop overwrites these each iteration, so this prints the attributes of the last keypoint only
print(x, y)
print(size)
print(orientation)
print(response)
print(octave)
print(class_id)


print(len(keypoints_query_img))
print(features_query_img.shape)
#Note a basic fact: a SIFT descriptor is computed for every keypoint detected in the image.
#Before computing descriptors, a detector (e.g. Harris, SIFT or SURF detector) finds the points of interest. Detecting keypoints and computing descriptors are two independent steps!

#drawing keypoints using drawKeypoints(input image, 
# keypoints, output image, color, flag) -- keypoints based off input picture
#Displaying keypoints and features on both detected images
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(20,8), constrained_layout=False)
ax1.imshow(cv2.drawKeypoints(train_photo_gray, keypoints_train_img, None, color=(0,255,0)))  #was keypoints_query_img: the train image must be drawn with its own keypoints
ax1.set_xlabel("(a)", fontsize=14)
ax2.imshow(cv2.drawKeypoints(query_photo_gray, keypoints_query_img,None,color=(0,255,0)))
ax2.set_xlabel("(b)", fontsize=14)
plt.savefig("./Stitching/" + feature_extraction_algo + "Images" + '.jpg', bbox_inches='tight', dpi=300, optimize=True, format='jpg')
plt.show()

def create_matching_object(method,crossCheck):
    "Create and return a Matcher Object"
    
    # For BF matcher, first we have to create the BFMatcher object using cv2.BFMatcher(). 
    # It takes two optional params. 
    # normType - It specifies the distance measurement
    # crossCheck - which is false by default. If it is true, Matcher returns only those matches 
    # with value (i,j) such that i-th descriptor in set A has j-th descriptor in set B as the best match 
    # and vice-versa. 
    if method == 'sift' or method == 'surf':
        bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=crossCheck)
    elif method == 'orb' or method == 'brisk':
        bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=crossCheck)
    return bf

def key_points_matching(features_train_img, features_query_img, method):
    bf = create_matching_object(method, crossCheck=True)
        
    # Match descriptors.
    best_matches = bf.match(features_train_img,features_query_img)
    
    # Sort the features in order of distance.
    # The points with small distance (more similarity) are ordered first in the vector
    rawMatches = sorted(best_matches, key = lambda x:x.distance)
    print("Raw matches with Brute force):", len(rawMatches))
    return rawMatches

def key_points_matching_KNN(features_train_img, features_query_img, ratio, method):
    bf = create_matching_object(method, crossCheck=False)
    # compute the raw matches and initialize the list of actual matches
    rawMatches = bf.knnMatch(features_train_img, features_query_img, k=2)
    print("Raw matches (knn):", len(rawMatches))
    matches = []
#loop over raw matches
    for m,n in rawMatches:
        # ensure the distance is within a certain ratio of each
        # other (i.e. Lowe's ratio test)
        if m.distance < n.distance * ratio:
            matches.append(m)
    return matches


print("Drawing: {} matched features Lines".format(feature_to_match))

fig = plt.figure(figsize=(20,8))

if feature_to_match == 'bf':
    matches = key_points_matching(features_train_img, features_query_img, method=feature_extraction_algo)
    
    mapped_features_image = cv2.drawMatches(train_photo_crop,keypoints_train_img,query_photo_crop,keypoints_query_img,matches[:100],None,flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)

# Now for cross checking draw the feature-mapping lines also with KNN
elif feature_to_match == 'knn':
    matches = key_points_matching_KNN(features_train_img, features_query_img, ratio=0.75, method=feature_extraction_algo)
    
    mapped_features_image_knn = cv2.drawMatches(train_photo_crop, keypoints_train_img, query_photo_crop, keypoints_query_img, np.random.choice(matches,50),None,flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
    

plt.imshow(mapped_features_image)
plt.axis('off')
plt.savefig("./Stitching/" + feature_to_match + "_matching_img_log_"+'.jpeg', bbox_inches='tight', dpi=300, optimize=True, format='jpeg')
plt.show()
feature_to_match = 'knn'

print("Drawing: {} matched features Lines".format(feature_to_match))

fig = plt.figure(figsize=(20,8))

if feature_to_match == 'bf':
    matches = key_points_matching(features_train_img, features_query_img, method=feature_extraction_algo)
    
    mapped_features_image = cv2.drawMatches(train_photo_crop,keypoints_train_img,query_photo_crop,keypoints_query_img,matches[:100],
                           None,flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)

# Now for cross checking draw the feature-mapping lines also with KNN
elif feature_to_match == 'knn':
    matches = key_points_matching_KNN(features_train_img, features_query_img, ratio=0.75, method=feature_extraction_algo)

    mapped_features_image_knn = cv2.drawMatches(train_photo_crop, keypoints_train_img, query_photo_crop, keypoints_query_img, np.random.choice(matches,100),None,flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
    

plt.imshow(mapped_features_image_knn)
plt.axis('off')
plt.savefig("./Stitching/" + feature_to_match + "_Images"+'.jpg', bbox_inches='tight', dpi=300, optimize=True, format='jpg')
plt.show()

def homography_stitching(keypoints_train_img, keypoints_query_img, matches, reprojThresh):
    
    keypoints_train_img = np.float32([keypoint.pt for keypoint in keypoints_train_img])
    keypoints_query_img = np.float32([keypoint.pt for keypoint in keypoints_query_img])
    
    #findHomography() needs a minimum number of corresponding points between the 2 images; here I take that minimum match count to be 4
    
    if len(matches) > 4:
    # construct the two sets of points
        points_train = np.float32([keypoints_train_img[m.queryIdx] for m in matches])
        points_query = np.float32([keypoints_query_img[m.trainIdx] for m in matches])
        
     # Calculate the homography between the sets of points
        (H, status) = cv2.findHomography(points_train, points_query, cv2.RANSAC, reprojThresh)

        return (matches, H, status)
    else:
        return None
   
    
    
    
M = homography_stitching(keypoints_train_img, keypoints_query_img, matches, reprojThresh=4)

if M is None:
    raise RuntimeError("Error! Not enough matches to compute a homography.")

(matches, Homography_Matrix, status) = M

print(Homography_Matrix)

#Finally, we can apply our transformation by calling the cv2.warpPerspective function. The first parameter is our 
# original image that we want to warp, 
#the second is our transformation matrix M (which will be obtained from homography_stitching), 
#and the final parameter is a tuple, used to indicate the width and height of the output image.

# For the calculation of the width and height of the final horizontal panoramic images 
# I can just add the widths of the individual images and for the height
# I can take the max from the 2 individual images.

width = query_photo_crop.shape[1] + train_photo_crop.shape[1]
print("width ", width) 
# 2922 - Which is exactly the sum value of the width of 
# my train.jpg and query.jpg


height = max(query_photo_crop.shape[0], train_photo_crop.shape[0])

# otherwise, apply a perspective warp to stitch the images together

# Now just plug that "Homography_Matrix" into cv2.warpPerspective and I shall have image1 warped into image2's frame

result = cv2.warpPerspective(train_photo_crop, Homography_Matrix,  (width, height))

# warpPerspective() returns an image of exactly the (width, height) passed in. Now overlay the query photo's pixels onto the warped result

result[0:query_photo_crop.shape[0], 0:query_photo_crop.shape[1]] = query_photo_crop

plt.figure(figsize=(20,10))
plt.axis('off')
plt.imshow(result)

imageio.imwrite("./Stitching/Images/Log771/finishedLog"+'.jpg', result)

plt.show()

The easy way:

OpenCV makes this easy for you with the cv2.Stitcher_create module. Everything is handled internally, from identifying key feature points to matching them appropriately and finally warping the images. You can pass in more than 2 images to be stitched. But I must warn you: the more images and/or the larger their dimensions, the longer the computation will take.

How do you use the cv2.Stitcher_create module?

First, we need to create an instance of the class:

imageStitcher = cv2.Stitcher_create()

To get a list of all the functions associated with this class, simply type help(imageStitcher). It returns a list of all the functions along with their required input arguments and expected outputs.
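(As an aside, not part of the original answer: Stitcher_create also accepts an optional mode flag. A minimal sketch, assuming flat strip-like captures such as these logs, where SCANS mode may fit better than the default rotating-camera PANORAMA mode:)

import cv2

#PANORAMA (default) models a rotating camera; SCANS assumes roughly planar, translated captures
imageStitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
help(imageStitcher)  #prints every method with its arguments and outputs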

The created instance contains a function stitch(), which is used to create the panorama. stitch can be used in one of two ways:

  1. ret, panorama_image = stitch(images)

Pass in a list of all the images; the module identifies the key features, matches them, and produces the warped result.

  2. ret, panorama_image = stitch(images, masks)

Optionally, we can also pass in a list of masks, one mask corresponding to each image. A mask is a binary image consisting of black and white. The module looks for keypoints/features only within the white regions of each mask, then proceeds to match them and produce the warped result.

Both of the above return a variable ret; a value of 0 means the stitching was executed without any problems.
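To illustrate the second form, here is a minimal sketch of my own (the file names left.jpg and right.jpg are hypothetical), restricting the feature search of each image to the half that overlaps the other:

import cv2
import numpy as np

images = [cv2.imread('left.jpg'), cv2.imread('right.jpg')]  #hypothetical file names

#one binary mask per image: features are searched only where the mask is white (255)
masks = []
for i, img in enumerate(images):
    mask = np.zeros(img.shape[:2], dtype=np.uint8)
    if i == 0:
        mask[:, img.shape[1] // 2:] = 255  #left image: the overlap is on its right side
    else:
        mask[:, :img.shape[1] // 2] = 255  #right image: the overlap is on its left side
    masks.append(mask)

imageStitcher = cv2.Stitcher_create()
ret, panorama_image = imageStitcher.stitch(images, masks)
if ret == 0:  #cv2.Stitcher_OK
    cv2.imwrite('panorama.jpg', panorama_image)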

The following code example (which I have borrowed) shows the first approach:

Sample images to be stitched:

[image: sample input images]

Code:

import os
import cv2

# path containing images to be stitched
path = 'stitching_images'

# append all the images within the path to a list
images = []
for image_file in os.listdir(path):
  if image_file.endswith('.jpg'):
    img = cv2.imread(os.path.join(path, image_file))
    images.append(img)

# creating instance of stitcher class
imageStitcher = cv2.Stitcher_create()

# call the 'stitch' function and pass in the list of images
status, stitched_img = imageStitcher.stitch(images)

# display the panorama if stitching is successful
if status == 0:
    cv2.imshow('Panorama', stitched_img)
    cv2.waitKey(0)  #keep the window open until a key is pressed

Result:

[image: stitched panorama]

(Code and images borrowed from https://github.com/niconielsen32/ComputerVision/tree/master/imageStitching)

The hard way:

If you want to create the panorama with your own code, I would suggest proceeding sequentially:

  • Iterate over all the images in your set (e.g. A, B, C, D, E)
  • For each pair of images, find keypoints and match them (AB, AC, AD, AE, BC, BD, etc.)
  • Pick the pair with the highest number of keypoint matches and stitch them using the homography (e.g. images A and C get warped into P1)
  • Next, find keypoint matches between the stitched image and all the other images in your set (P1B, P1D, P1E)
  • Find the pair with the highest number of matches and stitch them
  • Repeat in the same way (see the sketch after this list)
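A minimal sketch of that greedy loop (my own illustration, not tested against your logs), reusing SIFT + BFMatcher + findHomography from your code; the file names are hypothetical and, for brevity, features are recomputed on every pass instead of being cached:

import cv2
import numpy as np

sift = cv2.SIFT_create()
bf = cv2.BFMatcher(cv2.NORM_L2)

def good_matches(img_a, img_b, ratio=0.75):
    #Lowe-ratio-filtered matches plus the keypoints of both images
    kp_a, des_a = sift.detectAndCompute(cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY), None)
    kp_b, des_b = sift.detectAndCompute(cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY), None)
    matches = [m for m, n in bf.knnMatch(des_a, des_b, k=2)
               if m.distance < n.distance * ratio]
    return kp_a, kp_b, matches

def stitch_pair(img_a, img_b):
    #warp img_a into img_b's frame, exactly as in the two-image code above
    kp_a, kp_b, matches = good_matches(img_a, img_b)
    if len(matches) <= 4:
        return None  #not enough correspondences for findHomography
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 4.0)
    width = img_a.shape[1] + img_b.shape[1]
    height = max(img_a.shape[0], img_b.shape[0])
    result = cv2.warpPerspective(img_a, H, (width, height))
    result[0:img_b.shape[0], 0:img_b.shape[1]] = img_b
    return result

#hypothetical file names for up to five logs
images = [cv2.imread(p) for p in ('A.bmp', 'B.bmp', 'C.bmp', 'D.bmp', 'E.bmp')]

while len(images) > 1:
    #score every ordered pair by its number of good matches and keep the best
    pairs = [(i, j) for i in range(len(images)) for j in range(len(images)) if i != j]
    i, j = max(pairs, key=lambda p: len(good_matches(images[p[0]], images[p[1]])[2]))
    pano = stitch_pair(images[i], images[j])
    if pano is None:
        break  #no remaining pair has enough matches
    #replace the two source images with their stitched result and repeat
    images = [img for k, img in enumerate(images) if k not in (i, j)] + [pano]

cv2.imwrite('panorama.jpg', images[0])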
