
Alternative approach to pixel matching

Hi, I am trying to perform image alignment and focus stacking, and I have achieved some results. However, the stacked image still has some noise and is not the result I was hoping for. As I understand it, this is caused by the image alignment performed before the images are merged. The noise could also come from the approach used, where alignment is done by pixel matching. I came across an article here: https://www.mfoot.com/blog/2011/07/08/enfuse-for-extended-dynamic-range-and-focus-stacking-in-microscopy/ — it describes an alternative approach where, instead of matching single pixels between images, the pixels of a local neighbourhood are considered. I can't find anything else on this. Can someone point me to any resources that might be helpful?

 # Current approach: ORB keypoint detection for matching
 detector = cv2.ORB_create(1000)
 image_1_kp, image_1_desc = detector.detectAndCompute(image1gray, None)

Alternative approach: feature matching instead of pixel (texture) matching

I will give you a more general example based on OpenCV's feature-matching tutorial: match features between the two images, then use the matched keypoints to estimate a transform that aligns them.

As an example, given two images:

box.png


box_in_scene.png


You can do feature matching in OpenCV as follows:

import cv2
import numpy as np

img1 = cv2.imread("box.png", cv2.IMREAD_GRAYSCALE)          # queryImage
img2 = cv2.imread("box_in_scene.png", cv2.IMREAD_GRAYSCALE) # trainImage

H, W = img1.shape

# Initiate ORB detector
orb = cv2.ORB_create(1000)

# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Match descriptors.
matches = bf.match(des1,des2)

# Sort them in the order of their distance.
matches = sorted(matches, key=lambda x: x.distance)

From there you can get the 10 best matching keypoints and use them to estimate the transform.

# Get the 10 best matching keypoints
query_pts = np.float32([kp1[m.queryIdx].pt for m in matches[:10]])
train_pts = np.float32([kp2[m.trainIdx].pt for m in matches[:10]])

# Estimate the transform; estimateAffine2D returns (matrix, inlier_mask)
M, _ = cv2.estimateAffine2D(train_pts, query_pts)

# Warp image
img3 = cv2.warpAffine(img2, M, (W, H))

The aligned image should look like this:

(image: img2 warped onto the box.png frame)
