I work with Ubuntu and the code is in C++ with OpenCV. I have been experimenting with detecting parts of pictures. It works very well, but now I want to find the position in my big picture. Here is the code:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp> // SIFT / SURF live in nonfree in OpenCV 2.x

using namespace cv;

int main(int argc, char** argv) {
    Mat img = imread("/home/ubuntu/workspace2/sift/src/inputklein.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    while (1) {
        Mat img2 = imread("/home/ubuntu/workspace2/sift/src/input.jpeg", CV_LOAD_IMAGE_GRAYSCALE); // frame

        SIFT sift; // declared but not used below; the detection uses SURF
        vector<KeyPoint> key_points;
        vector<KeyPoint> key_points2;

        //-- Step 1: Detect the keypoints using the SURF detector
        int minHessian = 100;
        SurfFeatureDetector detector(minHessian);
        detector.detect(img, key_points);
        detector.detect(img2, key_points2);

        //-- Step 2: Calculate descriptors (feature vectors)
        SurfDescriptorExtractor extractor;
        Mat descriptors1;
        Mat descriptors2;
        extractor.compute(img, key_points, descriptors1);
        extractor.compute(img2, key_points2, descriptors2);

        //-- Step 3: Match descriptor vectors using the FLANN matcher
        FlannBasedMatcher matcher;
        std::vector<DMatch> matches;
        matcher.match(descriptors1, descriptors2, matches);

        //-- Quick calculation of max and min distances between matches
        double max_dist = 20; double min_dist = 10;
        for (int i = 0; i < descriptors1.rows; i++) {
            double dist = matches[i].distance;
            if (dist < min_dist) min_dist = dist;
            if (dist > max_dist) max_dist = dist;
        }
        //std::cout << "Max dist: " << max_dist;
        //std::cout << "Min dist: " << min_dist;

        //-- Keep only the "good" matches (i.e. whose distance is less than
        //-- 2*min_dist, or a small arbitrary value (0.02) in the event that
        //-- min_dist is very small)
        //-- PS: radiusMatch can also be used here.
        std::vector<DMatch> good_matches;
        for (int i = 0; i < descriptors1.rows; i++) {
            if (matches[i].distance <= max(2 * min_dist, 0.02))
                good_matches.push_back(matches[i]);
        }

        //-- Draw only the "good" matches
        Mat img_matches;
        drawMatches(img, key_points, img2, key_points2,
                    good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
                    vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
        //std::cout << key_points[1].pt.x << "\n";
        //std::cout << key_points2[1].pt.y << "\n";

        cv::imshow("test", img_matches);
        waitKey(30);
    }
    return 0;
}
Okay, but how can I get the position, or the place, where most of the keypoints are? Can someone give me a tip on how to understand this? I saw that I can use something like key_points[1].pt.x (or .pt.y), but don't I then have to check every x,y position? The next thing would be good_matches[1].queryIdx, but the same question applies there. How can I find where it is?
A big question for me is: why is there only a loop over the rows? Shouldn't it be over rows and columns? For my goal it should work like an array(x, y), where I check every position whether it is the same... (I have problems with the non-trivial data types.)
Where can I find the code of drawMatches (for example)? Normally I would try "Open Declaration" (using Eclipse, C++), but I only see the header and not the real function. I need the code, and I hope I can change it all without OpenCV, or maybe write the loops myself... For that I must understand how to read and use the vector of DMatch.
Thanks for your help. Best regards,
If I understand you correctly, you want to find the keypoint positions and classify them. To get the keypoint positions you need a loop over the match vector, saving each position in a matrix.
Then you can compare the positions of these keypoints with image areas (whatever you want) and classify them depending on the area of the image they are in.
Mat pointsInFirstAreaRight, pointsInFirstAreaLeft;
for (int i = 0; i < 2; i++) {
    // matches is the vector<DMatch> returned by the matcher; queryIdx indexes
    // the keypoints of the first image, trainIdx those of the second
    for (vector<DMatch>::const_iterator it = matches.begin(); it != matches.end(); ++it) {
        int idx = (i == 0) ? it->queryIdx : it->trainIdx;
        // Get the position of the matched keypoint in image i
        float x = keypoints[i][idx].pt.x;
        float y = keypoints[i][idx].pt.y;
        // If the detected point lies in the first 20 pixel columns
        // (and below the first 20 rows)
        if ((x < 20) && (y > 20)) {
            // Classify this point into its area
            if (i == 0)
                pointsInFirstAreaRight.push_back(Point2f(x, y));
            else if (i == 1)
                pointsInFirstAreaLeft.push_back(Point2f(x, y));
        }
    }
}
About the sift function: it does the detection and the extraction at the same time. I use it in my code like this:
> SIFT sift;
>
> /* get keypoints on the images */
> sift(imagenI, Mat(), keypoints[0], descriptors[0]);
> sift(imagenD, Mat(), keypoints[1], descriptors[1]);
and then it detects the keypoints and extracts the descriptors of the points.
Then I only have to do the matching.
I hope this helps.