
Affine transformation between contours in OpenCV

I have a historical time sequence of seafloor images scanned from film that need registration.

from pylab import *
import math                       # math.atan2 is used below
import matplotlib.pyplot as plt   # plt.figure / plt.legend are used below
import cv2
import urllib

urllib.urlretrieve('http://geoport.whoi.edu/images/frame014.png','frame014.png');
urllib.urlretrieve('http://geoport.whoi.edu/images/frame015.png','frame015.png');

gray1=cv2.imread('frame014.png',0)
gray2=cv2.imread('frame015.png',0)
figure(figsize=(14,6))
subplot(121);imshow(gray1,cmap=cm.gray);
subplot(122);imshow(gray2,cmap=cm.gray);

[image: the two grayscale frames, frame014 and frame015, side by side]

I want to use the black region on the left of each image to do the registration, since that region was inside the camera and should be fixed in time. So I just need to compute the affine transformation between the black regions.

I determined these regions by thresholding and finding the largest contour:

def find_biggest_contour(gray,threshold=40):
    # threshold a grayscale image 
    ret,thresh = cv2.threshold(gray,threshold,255,cv2.THRESH_BINARY_INV)
    # find the contours
    contours,h = cv2.findContours(thresh,mode=cv2.RETR_LIST,method=cv2.CHAIN_APPROX_NONE)
    # measure the perimeter
    perim = [cv2.arcLength(cnt,True) for cnt in contours]
    # find contour with largest perimeter
    i=perim.index(max(perim))
    return contours[i]

c1=find_biggest_contour(gray1)
c2=find_biggest_contour(gray2)

x1=c1[:,0,0];y1=c1[:,0,1]
x2=c2[:,0,0];y2=c2[:,0,1]

figure(figsize=(8,8))
imshow(gray1,cmap=cm.gray, alpha=0.5);plot(x1,y1,'b-')
imshow(gray2,cmap=cm.gray, alpha=0.5);plot(x2,y2,'g-')
axis([0,1500,1000,0]);

[image: largest contour of each frame (blue and green) overlaid on the images]

The blue is the longest contour from the 1st frame, the green is the longest contour from the 2nd frame.

What is the best way to determine the rotation and offset between the blue and green contours?

I only want to use the right side of the contours in some region surrounding the step, something like the region between the arrows.

Of course, if there is a better way to register these images, I'd love to hear it. I already tried a standard feature matching approach on the raw images, and it didn't work well enough.

Following Shambool's suggested approach, here's what I've come up with. I used a Ramer-Douglas-Peucker algorithm to simplify the contour in the region of interest and identified the two turning points. I was going to use the two turning points to get my three unknowns (xoffset, yoffset and angle of rotation), but the 2nd turning point is a bit too far toward the right because RDP simplified away the smoother curve in this region. So instead I used the angle of the line segment leading up to the 1st turning point. Differencing this angle between image1 and image2 gives me the rotation angle. I'm still not completely happy with this solution. It worked well enough for these two images, but I'm not sure it will work well on the entire image sequence. We'll see.

It would really be better to fit the contour to the known shape of the black border.
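
Something like this nearest-neighbor least-squares fit might be one way to do that (just a sketch; fit_rigid and the template contour template_xy are hypothetical names):

import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

def fit_rigid(template_xy, contour_xy):
    # Fit (xoff, yoff, angle) so the rotated + shifted contour points
    # land on the template contour (nearest-neighbor least squares).
    tree = cKDTree(template_xy)
    def cost(p):
        xoff, yoff, ang = p
        c, s = np.cos(ang), np.sin(ang)
        R = np.array([[c, -s], [s, c]])
        moved = contour_xy.dot(R.T) + np.array([xoff, yoff])
        d, _ = tree.query(moved)        # distance to nearest template point
        return np.sum(d**2)
    res = minimize(cost, x0=[0., 0., 0.], method='Nelder-Mead')
    return res.x                        # xoff, yoff, angle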

# select region of interest from largest contour 
ind1=where((x1>190.) & (y1>200.) & (y1<900.))[0]
ind2=where((x2>190.) & (y2>200.) & (y2<900.))[0]
figure(figsize=(10,10))
imshow(gray1,cmap=cm.gray, alpha=0.5);plot(x1[ind1],y1[ind1],'b-')
imshow(gray2,cmap=cm.gray, alpha=0.5);plot(x2[ind2],y2[ind2],'g-')
axis([0,1500,1000,0])

[image: region of interest of each contour overlaid on the images]

def angle(x1,y1):
    #  Returns angle of each segment along an (x,y) track
    return array([math.atan2(y,x) for (y,x) in zip(diff(y1),diff(x1))])

def simplify(x,y, tolerance=40, min_angle = 60.*pi/180.): 
    """
    Use the Ramer-Douglas-Peucker algorithm to simplify the path
    http://en.wikipedia.org/wiki/Ramer-Douglas-Peucker_algorithm
    Python implementation: https://github.com/sebleier/RDP/
    """
    from RDP import rdp   
    points=vstack((x,y)).T
    simplified = array(rdp(points.tolist(), tolerance))
    sx, sy = simplified.T

    theta=abs(diff(angle(sx,sy)))
    # Select the index of the points with the greatest theta
    # Large theta is associated with greatest change in direction.
    idx = where(theta>min_angle)[0]+1
    return sx,sy,idx

sx1,sy1,i1 = simplify(x1[ind1],y1[ind1])
sx2,sy2,i2 = simplify(x2[ind2],y2[ind2])
fig = plt.figure(figsize=(10,6))
ax =fig.add_subplot(111)

ax.plot(x1, y1, 'b-', x2, y2, 'g-',label='original path')
ax.plot(sx1, sy1, 'ko-', sx2, sy2, 'ko-',lw=2, label='simplified path')
ax.plot(sx1[i1], sy1[i1], 'ro', sx2[i2], sy2[i2], 'ro', 
    markersize = 10, label='turning points')
ax.invert_yaxis()
plt.legend(loc='best')

[image: original and simplified paths, with the turning points marked in red]

# determine x,y offset between 1st turning points, and 
# angle from difference in slopes of line segments approaching 1st turning point
xoff = sx2[i2[0]] - sx1[i1[0]]
yoff = sy2[i2[0]] - sy1[i1[0]]
iseg1 = [i1[0]-1, i1[0]]
iseg2 = [i2[0]-1, i2[0]]
ang1 = angle(sx1[iseg1], sy1[iseg1])
ang2 = angle(sx2[iseg2], sy2[iseg2])
ang = -(ang2[0] - ang1[0])
print xoff, yoff, ang*180./pi   # radians to degrees

-28 14 0.514485

# 2x3 affine matrix M
M=array([cos(ang),sin(ang),xoff,-sin(ang),cos(ang),yoff]).reshape(2,3)
print M

[[  9.99959685e-01   8.97932821e-03  -2.80000000e+01]
 [ -8.97932821e-03   9.99959685e-01   1.40000000e+01]]

# warp 2nd image into coordinate frame of 1st
Minv = cv2.invertAffineTransform(M)
gray2b = cv2.warpAffine(gray2,Minv,shape(gray2.T))

figure(figsize=(10,10))
imshow(gray1,cmap=cm.gray, alpha=0.5);plot(x1[ind1],y1[ind1],'b-')
imshow(gray2b,cmap=cm.gray, alpha=0.5);
axis([0,1500,1000,0]);
title('image1 and transformed image2 overlain with 50% transparency');

[image: image1 and transformed image2 overlaid with 50% transparency]

Good question.

One approach is to represent the contours as 2D point clouds and then do registration. There is simple and clear Matlab code that can give you the affine transform.

There is also more complex C++ code (using the VXL lib) with Python and Matlab wrappers included. Or you can use some modified ICP (iterative closest point) algorithm that is robust to noise and can handle an affine transform.
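
For reference, a minimal sketch of a plain point-to-point ICP loop for a rigid 2D transform (NumPy/SciPy; the "modified" robust variants add outlier rejection and an affine model on top of this):

import numpy as np
from scipy.spatial import cKDTree

def icp_rigid_2d(src, dst, n_iter=30):
    # src, dst: (N,2) and (M,2) point clouds (e.g. contour points)
    tree = cKDTree(dst)
    cur = src.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(n_iter):
        # 1. match every source point to its nearest destination point
        _, idx = tree.query(cur)
        matched = dst[idx]
        # 2. best rigid transform for these matches (Kabsch / Procrustes)
        mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
        H = np.dot((cur - mu_s).T, matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = np.dot(Vt.T, U.T)
        if np.linalg.det(R) < 0:        # guard against reflections
            Vt[-1] *= -1
            R = np.dot(Vt.T, U.T)
        t = mu_d - np.dot(R, mu_s)
        # 3. apply and accumulate
        cur = np.dot(cur, R.T) + t
        R_total = np.dot(R, R_total)
        t_total = np.dot(R, t_total) + t
    return R_total, t_total             # dst ~= dot(src, R_total.T) + t_total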

Also, your contours seem to be not very accurate, which could be a problem.

Another approach is to use some kind of registration that works on pixel values directly. There is Matlab code for this (I think it uses some kind of minimizer plus a cross-correlation metric). There may also be some kind of optical-flow registration (or another kind) of the sort used in medical imaging.
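
For the translation-only core of that idea, a minimal FFT phase-correlation sketch looks something like this (it recovers a pure shift, not rotation, so at best a starting point here):

import numpy as np

def phase_corr_shift(a, b):
    # Estimate the (dy, dx) shift s such that b(x) ~= a(x - s),
    # from the peak of the normalized cross-power spectrum.
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = np.conj(A) * B
    R /= np.abs(R) + 1e-12
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts past the midpoint wrap around to negative values
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx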

You can also use point features such as SIFT (or SURF).

You can try it quickly in FIJI (ImageJ); see also this link.

  1. Open the 2 images
  2. Plugins -> Feature Extraction -> SIFT (or another extractor)
  3. Set the expected transformation to affine
  4. Look at the estimated transformation model (a 3x3 homography matrix) in the ImageJ log. If it works well, you can implement it in Python using OpenCV (a rough sketch follows below) or maybe using Jython with ImageJ.
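
For step 4, a rough OpenCV-Python equivalent could look like the following (a sketch; cv2.SIFT_create exists in recent OpenCV builds, and cv2.estimateAffine2D needs OpenCV 3.2 or newer):

import cv2
import numpy as np

def affine_from_sift(img1, img2):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    # Lowe's ratio test to keep only distinctive matches
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75*n.distance]
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # robust affine fit; RANSAC rejects the remaining bad matches
    M, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    return M                            # 2x3 affine matrix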

It would also help if you posted the original images and described all the conditions (it seems the image content changes between frames).

You can represent these contours with their respective ellipses. Each ellipse is centered on the centroid of the contour and oriented along its main density axis. You can then compare the centroids and the orientation angles.

1) Fill the contours => drawContours with thickness=CV_FILLED

2) Find moments => cvMoments()

3) And use them:

Centroid: { x, y } = { M10/M00, M01/M00 }

Orientation (theta): theta = 0.5 * arctan( 2*mu11' / (mu20' - mu02') ), where the mu' are the central second moments normalized by M00.

EDIT: I customized the sample code from legacy (enteringblobdetection.cpp) for your case.

/* Image moments */
double      M00,X,Y,XX,YY,XY;
CvMoments   m;
CvRect      r = ((CvContour*)cnt)->rect;
CvMat       mat;
cvMoments( cvGetSubRect(pImgFG,&mat,r), &m, 0 );
M00 = cvGetSpatialMoment( &m, 0, 0 );
X  = cvGetSpatialMoment( &m, 1, 0 )/M00;
Y  = cvGetSpatialMoment( &m, 0, 1 )/M00;
XX = (cvGetSpatialMoment( &m, 2, 0 )/M00) - X*X;
YY = (cvGetSpatialMoment( &m, 0, 2 )/M00) - Y*Y;
XY = (cvGetSpatialMoment( &m, 1, 1 )/M00) - X*Y;

/* Contour description */
CvPoint myCentroid = cvPoint( r.x+(int)X, r.y+(int)Y );
/* atan2 form of the orientation formula, including the 0.5 factor */
double myTheta = 0.5 * atan2( 2*XY, XX-YY );
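
For reference, the same computation in the modern Python API would be something along these lines (my sketch using cv2.moments; note the 0.5 factor in the standard orientation formula):

import cv2
import math

def centroid_and_theta(cnt):
    m = cv2.moments(cnt)
    x = m['m10']/m['m00']
    y = m['m01']/m['m00']
    # central second moments, normalized by area
    xx = m['m20']/m['m00'] - x*x
    yy = m['m02']/m['m00'] - y*y
    xy = m['m11']/m['m00'] - x*y
    theta = 0.5*math.atan2(2*xy, xx - yy)
    return (x, y), theta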

Also, check this along with the OpenCV 2.0 examples.

If you don't want to find the full homography between the two images but only the rotation and offset (a rigid transform), you have three unknowns: the rotation angle (R) and the displacement in x and y (X, Y). Therefore a minimum of two points (with two known values for each) is needed to find the unknowns. You can match either two points between the two images, or two lines, each of which contributes two known values (intercept and slope). If you go with the point-matching approach, the further the points are from each other, the more robust the recovered transform is to noise (this is very simple if you remember the error-propagation rules).

In the two-point matching method (a small sketch follows the list):

  1. find two points (A and B) in the first image I1 and their corresponding points (A',B') in the second image I2
  2. find the middle point between A and B: C, and the middle point between A' and B': C'
  3. the difference between C and C' (C' - C) gives the translation between the images (X and Y)
  4. using the dot product of (A - C) and (A' - C') you can find the rotation angle (R)
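
A small sketch of those four steps in plain NumPy (the atan2 of cross and dot products gives the signed angle of step 4):

import numpy as np

def rigid_from_two_points(A, B, A2, B2):
    # A, B in image 1 and their matches A2, B2 in image 2 (2-vectors)
    A, B, A2, B2 = map(np.asarray, (A, B, A2, B2))
    C  = (A + B)/2.0                 # midpoint in image 1
    C2 = (A2 + B2)/2.0               # midpoint in image 2
    X, Y = C2 - C                    # translation of the midpoints
    v, w = A - C, A2 - C2
    # signed angle between the half-segments C->A and C2->A2
    R_ang = np.arctan2(v[0]*w[1] - v[1]*w[0], np.dot(v, w))
    return X, Y, R_ang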

To detect robust points, I would find the points along the side of the contour you have found with the highest absolute value of the second derivative (Hessian) and then try to match them. Since you mentioned this is video footage, you can safely assume the transformation between any two consecutive frames is small, which helps reject outliers.
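
One quick way to pick such points from the (x, y) contour arrays could be a finite-difference curvature measure, for example:

import numpy as np

def curvature_peaks(x, y, n_peaks=2):
    # curvature magnitude from finite-difference derivatives along the track
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    k = np.abs(dx*ddy - dy*ddx)/(dx*dx + dy*dy + 1e-12)**1.5
    return np.argsort(k)[-n_peaks:]    # indices of the sharpest turns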
