For template matching I'm using TM_CCOEFF_NORMED
in Java, and until now I have always had accurate and meaningful results. With this specific template image, however, I get wrong match scores. The template image deliberately does not belong to the input (source) image, so I expect the worst possible matching scores, yet the method returns the best score, 1.0, and always finds the template at the same place: the top-left corner.
Here is my template image:
Example output with a red cloud input image: (the green highlight is the best match according to the program)
Example output with a dark city input image:
MinMaxLocResult mmr = Core.minMaxLoc(result);
matchScore = mmr.maxVal;
The matchScore
variable is always 1.0 for this specific light-green template image, although the red and the dark input images are not similar to green at all. I would be glad for suggestions and comments, because TM_CCOEFF_NORMED
always reports the first searched square/rectangle as the best match with score 1.0, which cannot be correct. I also tried TM_CCORR_NORMED
and TM_SQDIFF_NORMED
; they gave different match scores, which is promising, but TM_CCORR_NORMED still gave a good matching score, which I did not expect. I would appreciate it if someone could explain the differences between the matching methods, or link to an existing page where they are discussed; the OpenCV docs/tutorials only give the formulas, with no detailed explanation. In the end I would like to know which match method to use for which kind of image to get the best results.
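Since the docs only give the formulas, a rough intuition can be built by evaluating the three normalized scores by hand at a single template position. The sketch below is plain Java (the class and helper names are hypothetical, not OpenCV API), assuming single-channel pixel values:

```java
// Plain-Java sketch of the three normalized match scores for ONE
// template position (hypothetical helpers, not part of OpenCV).
public class MatchFormulas {

    // TM_SQDIFF_NORMED: normalized sum of squared differences, 0 = perfect.
    static double sqdiffNormed(double[] img, double[] tpl) {
        double num = 0, sumI = 0, sumT = 0;
        for (int k = 0; k < tpl.length; k++) {
            double d = img[k] - tpl[k];
            num  += d * d;
            sumI += img[k] * img[k];
            sumT += tpl[k] * tpl[k];
        }
        return num / Math.sqrt(sumI * sumT);
    }

    // TM_CCORR_NORMED: normalized cross-correlation, 1 = perfect.
    // Rewards overlapping brightness, so bright regions can match
    // bright templates even when the patterns differ.
    static double ccorrNormed(double[] img, double[] tpl) {
        double num = 0, sumI = 0, sumT = 0;
        for (int k = 0; k < tpl.length; k++) {
            num  += img[k] * tpl[k];
            sumI += img[k] * img[k];
            sumT += tpl[k] * tpl[k];
        }
        return num / Math.sqrt(sumI * sumT);
    }

    // TM_CCOEFF_NORMED: like CCORR but mean-subtracted first, so it
    // compares how the *patterns* co-vary, not absolute brightness.
    static double ccoeffNormed(double[] img, double[] tpl) {
        double mi = mean(img), mt = mean(tpl);
        double num = 0, sumI = 0, sumT = 0;
        for (int k = 0; k < tpl.length; k++) {
            double di = img[k] - mi, dt = tpl[k] - mt;
            num  += di * dt;
            sumI += di * di;
            sumT += dt * dt;
        }
        return num / Math.sqrt(sumI * sumT);
    }

    static double mean(double[] a) {
        double s = 0;
        for (double v : a) s += v;
        return s / a.length;
    }

    public static void main(String[] args) {
        double[] tpl      = {10, 200, 10, 200};    // a patterned template
        double[] same     = {10, 200, 10, 200};    // identical patch
        double[] brighter = {110, 300, 110, 300};  // same pattern, brighter

        System.out.println(sqdiffNormed(same, tpl));     // 0.0 (perfect)
        System.out.println(ccorrNormed(same, tpl));      // 1.0 (perfect)
        System.out.println(ccoeffNormed(brighter, tpl)); // 1.0 (offset ignored)
        System.out.println(ccorrNormed(brighter, tpl));  // ~0.95 (offset penalized)
    }
}
```

The practical takeaway from the last two lines: CCOEFF is invariant to a uniform brightness offset while CCORR is not, which is why CCORR tends to produce optimistic scores on bright images.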
Here is some more code:
Mat img = Highgui.imread(inFile);
Mat templ = Highgui.imread(templateFile);
// Create the result matrix
int result_cols = img.cols() - templ.cols() + 1;
int result_rows = img.rows() - templ.rows() + 1;
Mat result = new Mat(result_rows, result_cols, CvType.CV_32FC1);
// Do the matching
Imgproc.matchTemplate(img, templ, result, match_method);
// Localize the best match with minMaxLoc
MinMaxLocResult mmr = Core.minMaxLoc(result);
It doesn't work because, when the pictures are converted to grayscale, they can end up looking similar.
cv::minMaxLoc doesn't work for this kind of situation.
You should use something else, such as a feature extractor or an edge detector, and compare the results with a metric such as the Mahalanobis distance.
For SQDIFF and SQDIFF_NORMED the best matches are the lowest values; for the rest of the methods the highest value is the best match.
Suggestion: normalize the result image before searching for the max or min locations.
Hope this helps.
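To make that min-vs-max rule concrete, here is a minimal plain-Java sketch (the enum and class names are hypothetical stand-ins, not OpenCV's Imgproc constants) that scans a result matrix and picks the right extreme for the chosen method:

```java
public class BestMatch {
    // Hypothetical stand-in for OpenCV's match-method constants.
    enum MatchMethod {
        TM_SQDIFF, TM_SQDIFF_NORMED,
        TM_CCORR, TM_CCORR_NORMED,
        TM_CCOEFF, TM_CCOEFF_NORMED
    }

    // For the SQDIFF variants the minimum is the best match;
    // for every other method the maximum is.
    static double bestScore(float[][] result, MatchMethod method) {
        boolean useMin = method == MatchMethod.TM_SQDIFF
                      || method == MatchMethod.TM_SQDIFF_NORMED;
        double best = useMin ? Double.POSITIVE_INFINITY
                             : Double.NEGATIVE_INFINITY;
        for (float[] row : result)
            for (float v : row)
                best = useMin ? Math.min(best, v) : Math.max(best, v);
        return best;
    }

    public static void main(String[] args) {
        // A fake 2x2 result matrix standing in for the real result Mat.
        float[][] result = { {0.2f, 0.9f}, {0.05f, 0.6f} };
        System.out.println(bestScore(result, MatchMethod.TM_SQDIFF_NORMED)); // lowest value
        System.out.println(bestScore(result, MatchMethod.TM_CCOEFF_NORMED)); // highest value
    }
}
```

With the real API, the same choice means reading mmr.minVal/mmr.minLoc for SQDIFF variants and mmr.maxVal/mmr.maxLoc for the others from the MinMaxLocResult.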
I have the same problem. It seems the template can't be a pure color (all pixels with the same value). If every template pixel has the same value, the TM_CCOEFF_NORMED
score is always 1.0. Maybe you could switch the matching method to TM_CCORR_NORMED
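The breakdown for a pure-color template can be checked directly against the TM_CCOEFF_NORMED formula: the method subtracts the template mean, so a flat template makes every template term zero and the score becomes 0/0. A plain-Java sketch (hypothetical names, not OpenCV code):

```java
public class FlatTemplate {
    // Numerator and denominator of TM_CCOEFF_NORMED at one position.
    static double[] ccoeffParts(double[] img, double[] tpl) {
        double mi = 0, mt = 0;
        for (int k = 0; k < tpl.length; k++) { mi += img[k]; mt += tpl[k]; }
        mi /= img.length;
        mt /= tpl.length;
        double num = 0, sumI = 0, sumT = 0;
        for (int k = 0; k < tpl.length; k++) {
            num  += (img[k] - mi) * (tpl[k] - mt);
            sumI += (img[k] - mi) * (img[k] - mi);
            sumT += (tpl[k] - mt) * (tpl[k] - mt);
        }
        return new double[] { num, Math.sqrt(sumI * sumT) };
    }

    public static void main(String[] args) {
        double[] flatTpl = {128, 128, 128, 128}; // pure-color template
        double[] anyImg  = {10, 250, 30, 90};    // arbitrary image patch
        double[] parts = ccoeffParts(anyImg, flatTpl);
        // Every (tpl[k] - mean) factor is zero, so both parts vanish:
        System.out.println(parts[0] + " / " + parts[1]); // 0.0 / 0.0
    }
}
```

That 0/0 ratio is undefined for every position in the result matrix, which matches the symptom reported in the question: a constant "best" score of 1.0 with the match pinned to the top-left corner.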
MinMaxLocResult mmr = Core.minMaxLoc(result);
This method always gives you a single best match in the result, whether or not it is actually correct. So in my experience I use two nested loops to collect every location in the result whose score passes a threshold:
double threshold = 0.95; // for TM_CCOEFF_NORMED and TM_CCORR_NORMED
List<Rect> recognizedRects = new ArrayList<>();
List<Double> recognizedScores = new ArrayList<>();
for (int i = 0; i < result.rows(); i++) {
    for (int j = 0; j < result.cols(); j++) {
        double score = result.get(i, j)[0]; // single-channel CV_32F result
        if (score > threshold) {
            recognizedRects.add(new Rect(j, i, templ.cols(), templ.rows()));
            recognizedScores.add(score);
        }
    }
}