
Compare images in Python and allow a pixel shift difference

After implementing a Canny edge detector I have to compare the results to the ones detected by a human, and calculate precision and recall (by comparing each pixel). Both images are binary. The thing is, I have to allow a pixel shift of size one between the images. That means that if I have a value of 1 at E(i,j) and the reference image has it, for example, at GT(i-1,j), there would still be a match. This shift is individual to each pixel and can be in any direction. For the implementation I must use either a mask or the function cv2.dilate(), but since dilating turns on more pixels, each of those could be matched with one in the reference image, therefore creating multiple matches for each original pixel, which is not allowed. Does anyone have an idea how to allow the pixel shift without creating multiple matches per pixel?
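For context, a dilate-based tolerant comparison usually looks roughly like the sketch below (a minimal sketch, not the required implementation; GT and E are assumed to be binary 0/1 NumPy arrays as in the question, and the 3x3 kernel encodes the one-pixel shift). It also shows where the over-counting comes from: comparing E against a dilated GT lets several E pixels claim the same GT pixel.

import cv2
import numpy as np

kernel = np.ones((3, 3), np.uint8)  # 3x3 kernel = shift of one pixel in any direction

GT_dilated = cv2.dilate(GT.astype(np.uint8), kernel)  # every GT pixel grows by one pixel
E_dilated = cv2.dilate(E.astype(np.uint8), kernel)    # every detected pixel grows by one pixel

# Precision: detected pixels that land within one pixel of some GT pixel
precision = np.logical_and(E == 1, GT_dilated == 1).sum() / max(E.sum(), 1)
# Recall: GT pixels that land within one pixel of some detected pixel
recall = np.logical_and(GT == 1, E_dilated == 1).sum() / max(GT.sum(), 1)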

A possible brute-force solution might look something like this:

num_feature_pixels = 0
matches = 0
num_rows, num_cols = GT.shape  # GT and E are binary NumPy arrays of equal shape
for i in range(num_rows):
    for j in range(num_cols):
        if GT[i, j] == 1:
            num_feature_pixels += 1
            # Search the 3x3 neighbourhood of (i, j) in E for a matching edge pixel
            for k in range(9):
                row, col = k // 3, k % 3
                r, c = i - 1 + row, j - 1 + col
                if 0 <= r < num_rows and 0 <= c < num_cols and E[r, c] == 1:
                    matches += 1
                    break
# Do something with the matches to total positives ratio

The chances of hitting a few false positives with this method are high, though, since there is no way to tell whether the pixel matched inside the 3x3 window is in fact the ground truth for that pixel in the E image.
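One way to avoid the multiple-match problem is greedy one-to-one matching: once an E pixel has been claimed by a GT pixel, mark it as used so no other GT pixel can match it. The sketch below assumes the same binary NumPy arrays GT and E of equal shape; the helper name is made up.

import numpy as np

def count_one_to_one_matches(GT, E):
    """Greedy one-to-one matching of GT edge pixels to E edge pixels
    within a one-pixel shift (hypothetical helper, not from the question)."""
    num_rows, num_cols = GT.shape
    used = np.zeros(E.shape, dtype=bool)  # E pixels already claimed by some GT pixel
    matches = 0
    for i, j in zip(*np.nonzero(GT)):
        # Candidate positions in E within the 3x3 neighbourhood of (i, j)
        for r, c in ((i + dr, j + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)):
            if 0 <= r < num_rows and 0 <= c < num_cols and E[r, c] == 1 and not used[r, c]:
                used[r, c] = True  # claim this E pixel so it cannot be matched again
                matches += 1
                break
    return matches

Recall would then be count_one_to_one_matches(GT, E) / GT.sum(), and swapping the arguments gives the numerator for precision. Greedy matching is order dependent, so it only approximates the optimal one-to-one assignment, but it does guarantee that no E pixel is counted twice.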
