How do I convert this C++ code to Python for automatic image rotation using OpenCV?

I want to do the following:

  1. Rotate the Incoming Image so that it aligns perfectly with the Template Image.
  2. Use cv2.subtract() to compare the two aligned images and print out the difference.

I already have the Python code to do the image comparison:

import cv2
import numpy as np

image1 = cv2.imread('letter f5.png') 
image2 = cv2.imread('letter f.png') 

difference = cv2.subtract(image1, image2)

result = not np.any(difference)  # True when every pixel matches

if result:
    print("The images are the same")
else:
    cv2.imshow("difference", difference)
    cv2.waitKey(0)
    print("The images are different")

The image comparison works well when the two images are aligned. If the Incoming Image is off by 90 degrees clockwise, the comparison no longer works.

So, how can I rotate this image:

[Image: incoming image rotated 90 degrees clockwise]

To this:

[Image: aligned incoming image]

so that I can compare it with the Template Image.

I have this C++ code:

#include <stdio.h>
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/nonfree/nonfree.hpp"
#include "opencv2/imgproc/imgproc.hpp"

#define PI 3.14159265

using namespace cv;
using namespace std;


void rotate(cv::Mat& src, double angle, cv::Mat& dst)
{
    int len = std::max(src.cols, src.rows);
    cv::Point2f pt(len / 2., len / 2.);
    cv::Mat r = cv::getRotationMatrix2D(pt, angle, 1.0);

    cv::warpAffine(src, dst, r, cv::Size(len, len));
}



float angleBetween(const Point &v1, const Point &v2)
{
    float len1 = sqrt(v1.x * v1.x + v1.y * v1.y);
    float len2 = sqrt(v2.x * v2.x + v2.y * v2.y);

    float dot = v1.x * v2.x + v1.y * v2.y;

    float a = dot / (len1 * len2);

    if (a >= 1.0)
        return 0.0;
    else if (a <= -1.0)
        return 180.0;               // keep the result in degrees, not radians
    else
        return acos(a) * 180 / PI;  // avoid truncating the result to an int
}



int main()
{

    Mat char1 = imread( "/Users/Rodrane/Documents/XCODE/OpenCV/mkedenemeleri/anarev/rotated.jpg",CV_LOAD_IMAGE_GRAYSCALE );

    Mat image = imread("/Users/Rodrane/Documents/XCODE/OpenCV/mkedenemeleri/anarev/gain2000_crop.jpg", CV_LOAD_IMAGE_GRAYSCALE );

    if( !char1.data || !image.data )
    {
        std::cout << "Error reading object " << std::endl;
        return -1;
    }

    GaussianBlur( char1, char1, Size(3, 3), 2, 2 );
    GaussianBlur( image, image, Size(3, 3), 2, 2 );
    adaptiveThreshold(char1,char1,255,CV_ADAPTIVE_THRESH_MEAN_C,CV_THRESH_BINARY,9,14);
    adaptiveThreshold(image,image,255,CV_ADAPTIVE_THRESH_MEAN_C,CV_THRESH_BINARY,9,14);

    //Detect the keypoints using SURF Detector
    int minHessian = 200;

    SurfFeatureDetector detector( minHessian );
    std::vector<KeyPoint> kp_object;

    detector.detect( char1, kp_object );

    //Calculate descriptors (feature vectors)
    SurfDescriptorExtractor extractor;
    Mat des_object;

    extractor.compute( char1, kp_object, des_object );

    FlannBasedMatcher matcher;


    namedWindow("Good Matches");

    std::vector<Point2f> obj_corners(4);

    //Get the corners from the object
    obj_corners[0] = cvPoint(0,0);
    obj_corners[1] = cvPoint( char1.cols, 0 );
    obj_corners[2] = cvPoint( char1.cols, char1.rows );
    obj_corners[3] = cvPoint( 0, char1.rows );

    Mat des_image, img_matches;
    std::vector<KeyPoint> kp_image;
    std::vector<vector<DMatch > > matches;
    std::vector<DMatch > good_matches;
    std::vector<Point2f> obj;
    std::vector<Point2f> scene;
    std::vector<Point2f> scene_corners(4);
    Mat H;


    detector.detect( image, kp_image );
    extractor.compute( image, kp_image, des_image );

    matcher.knnMatch(des_object, des_image, matches, 2);

    for (size_t i = 0; i < matches.size(); i++)
    {
        // ratio test; only accept entries that actually have two neighbours
        if (matches[i].size() == 2 && matches[i][0].distance < 0.6 * matches[i][1].distance)
        {
            good_matches.push_back(matches[i][0]);
        }
    }

    //Draw only "good" matches
    drawMatches( char1, kp_object, image, kp_image, good_matches, img_matches, Scalar::all(-1), Scalar::all(-1), vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

    if (good_matches.size() >= 4)
    {
        for( int i = 0; i < good_matches.size(); i++ )
        {
            //Get the keypoints from the good matches
            obj.push_back( kp_object[ good_matches[i].queryIdx ].pt );
            scene.push_back( kp_image[ good_matches[i].trainIdx ].pt );
            cout<<angleBetween(obj[i],scene[i])<<endl; //angles between images

        }

        H = findHomography( obj, scene, CV_RANSAC );


        perspectiveTransform( obj_corners, scene_corners, H);

       // cout<<angleBetween(obj[0], scene[0])<<endl;


        //Draw lines between the corners (the mapped object in the scene image )

    }

    //Show detected matches
    // resize(img_matches, img_matches, Size(img_matches.cols/2, img_matches.rows/2));

    imshow( "Good Matches", img_matches );
    waitKey();

    return 0;
}
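
For reference, a rough Python sketch of the matching-and-alignment part of this C++ code could look like the following. It is not a line-for-line port: it assumes ORB features (SURF lives in OpenCV's non-free contrib module and is missing from most builds) and uses cv2.estimateAffinePartial2D, which recovers the rotation angle directly, instead of findHomography. The filenames are the ones used elsewhere in this question.

import cv2
import numpy as np

template = cv2.imread('letter f.png', cv2.IMREAD_GRAYSCALE)
incoming = cv2.imread('letter defect f90.png', cv2.IMREAD_GRAYSCALE)

# Detect and describe keypoints (ORB stands in for the non-free SURF)
orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(template, None)
kp2, des2 = orb.detectAndCompute(incoming, None)

# knnMatch + ratio test, as in the C++ loop above
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m[0] for m in matches
        if len(m) == 2 and m[0].distance < 0.6 * m[1].distance]

if len(good) >= 4:
    src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC fit of rotation + translation (+ uniform scale)
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    angle = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
    print("estimated rotation: %.1f degrees" % angle)

    h, w = template.shape[:2]
    aligned = cv2.warpAffine(incoming, M, (w, h))
else:
    print("not enough good matches: %d" % len(good))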

How can I rotate the Incoming Image automatically so that it aligns perfectly with the Template Image? I have the following code, which rotates the Incoming Image manually by 90 degrees anticlockwise:

import numpy as np
import cv2

img = cv2.imread('letter defect f90.png',0)
rows,cols = img.shape

M = cv2.getRotationMatrix2D((cols/2,rows/2),90,1)
dst = cv2.warpAffine(img,M,(cols,rows))

cv2.imwrite('result_rotate.png', dst)

img3 = cv2.imread('letter f.png')
img4 = cv2.imread('result_rotate.png')

difference = cv2.subtract(img3, img4)

result = not np.any(difference)  # True when every pixel matches

if result:
    print("The images are the same")
else:
    cv2.imshow("difference", difference)
    cv2.waitKey(0)
    print("The images are different")

I came up with this after reading your comment. My answer may not be perfect, but I hope it gives you some ideas towards a better solution.

Perform a contour operation on the image whose rotation you want to find, then fit an ellipse around the contour you obtain. Based on the fitted ellipse you can tell whether the image is vertical, horizontal, or inclined in some other direction:

- If your contour object is broad, the major axis of the fitted ellipse will be horizontal.

- If your contour object is thin and tall, the major axis of the fitted ellipse will be vertical.

If the fitted ellipse is neither vertical nor horizontal, you will need to perform an orientation alignment, as in the sketch below.
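
A rough sketch of that idea, assuming OpenCV 4.x and a light object on a dark background (the filename is the one from the question):

import cv2

img = cv2.imread('letter defect f90.png', cv2.IMREAD_GRAYSCALE)

# Binarise (Otsu) so the letter becomes a single white blob on black
_, thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# OpenCV 4.x signature: findContours returns (contours, hierarchy)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)

# fitEllipse needs at least 5 contour points; the angle is in degrees,
# but its exact convention varies between versions, so verify on your data
(cx, cy), (ax1, ax2), angle = cv2.fitEllipse(largest)
print("ellipse orientation:", angle)

# Rotate back around the ellipse centre to undo the tilt
h, w = img.shape[:2]
M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
upright = cv2.warpAffine(img, M, (w, h))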

Hope it helps!!!!

EDIT

I guess you want to rotate your image. You can use the getRotationMatrix2D() function available in the OpenCV library (snippet adapted from here):

import cv2

img = cv2.imread("image.jpg")   # placeholder filename: the image to rotate
(x, y) = img.shape[:2]          # x = height (rows), y = width (cols)
center = (y / 2, x / 2)

M = cv2.getRotationMatrix2D(center, 90, 1.0)
rotate = cv2.warpAffine(img, M, (y, x))
cv2.imwrite("rotated.jpg", rotate)
  • 1st parameter : the center of the image, around which the rotation is performed.
  • 2nd parameter : the angle, in degrees, by which to rotate the image around that center.
  • 3rd parameter : the scale. It decides how big or small you want your image to be.
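
One caveat with the snippet above: the output canvas keeps the original (y, x) size, so a 90-degree turn of a non-square image gets its corners cropped. A small adjustment (a sketch, reusing img, x and y from above) shifts the result onto a canvas with swapped dimensions:

M = cv2.getRotationMatrix2D((y / 2, x / 2), 90, 1.0)
# shift the rotated content onto a width/height-swapped canvas
M[0, 2] += (x - y) / 2
M[1, 2] += (y - x) / 2
full = cv2.warpAffine(img, M, (x, y))   # dsize is (width, height) = (x, y)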

Here is your original image:

[Image: original image]

This is the rotated image obtained:

[Image: rotated image]
