
How to improve accuracy of CV2's HoughCircles

I've been dabbling in OpenCV for the sake of learning something new, and one of the projects I set for myself (as suggested by a friend) was determining the diameter of the circles generated by the pills in an antibiogram sensitivity test. An example of one can be found here. While searching for ways to do this, I found out about CV2's HoughCircles, so I started a long journey of reading about it while also trial-and-erroring my way to some sort of decent result. By now, I'd like to say I have a "decent" understanding of how the function works; however, I definitely don't know how to make the most of it.

Enter the code I have so far:

import numpy
import cv2

image = cv2.imread("antibiograma.png")
output = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.blur(gray, (20,30), cv2.BORDER_DEFAULT)
height, width = blur.shape[:2]

minR = round(width/65)
maxR = round(width/11)
minDis = round(width/7)

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.3, minDis, param1=23, param2=72, minRadius=minR, maxRadius=maxR)

if circles is not None:
    circles = numpy.round(circles[0, :]).astype("int")
    for (x, y, r) in circles:
        cv2.circle(output, (x, y), r, (0, 255, 0), 2)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
cv2.imshow("result", numpy.hstack([image, output]))
cv2.waitKey()

Most of it was copied from some of the resources I looked up while trying to wrap my head around how it worked, with most parameters modified to better adjust to the images I'm working with.

The results are... not great. It seems to mark the approximate position of the circles, but not very accurately. Most of the circles have some nasty offset, are a bit bigger than they should be, and there's one in particular (the leftmost one) that is almost completely wrong. Not to mention that some of these numbers (particularly dp and param2) might as well be magic numbers, something I'm not too happy with and that I'm pretty sure will be the demise of the code I wrote when I decide to apply it to a different antibiogram sensitivity test (which, I must add, I plan to do).

Edit: I also decided to play around a bit with the image and applied the following transformation:

im4 = 255.0 * (im/255.0)**2

Then I took the figure generated by Matplotlib and ran it through HoughCircles (with different parameters):

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 2.3, minDis, param1=1250, param2=50, maxRadius=maxR)

That gave me this result. It's a lot more accurate, but it still suffers from the magic-numbers issue, on top of ignoring the leftmost pill.
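
For reference, here's roughly what that looks like end to end, as a sketch that skips the Matplotlib round-trip and feeds the transformed array straight back into HoughCircles (the conversion to uint8 is needed because HoughCircles expects an 8-bit single-channel image):

import numpy
import cv2

image = cv2.imread("antibiograma.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# gamma-style darkening: squaring the normalized intensities pushes mid-tones down
im4 = 255.0 * (gray / 255.0) ** 2
im4 = numpy.uint8(im4)  # HoughCircles needs an 8-bit single-channel image

height, width = im4.shape[:2]
maxR = round(width/11)
minDis = round(width/7)

circles = cv2.HoughCircles(im4, cv2.HOUGH_GRADIENT, 2.3, minDis, param1=1250, param2=50, maxRadius=maxR)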

So, it boils down to 2 things:

  1. How can I improve the accuracy of the function?
  2. How can I determine a set of parameters that will work with more than 1 image?

I apologize if these seem dumb and basic; I'm rather new to this whole thing, so I'd appreciate any help.

Cheers!

That test example actually looks ok for an OpenCV out-of-the-box algorithm result. There's always room for improvement.

Here's what I was able to do:

import numpy
import cv2

image = cv2.imread("bact.png")
output = image.copy()

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# global histogram equalization to boost contrast before edge detection
hist = cv2.equalizeHist(gray)

# heavy Gaussian blur to suppress noise and fine detail
blur = cv2.GaussianBlur(hist, (31,31), cv2.BORDER_DEFAULT)
height, width = blur.shape[:2]

# radius and spacing limits derived from the image width
minR = round(width/65)
maxR = round(width/11)
minDis = round(width/7)

# note that the blurred image (not the raw grayscale) is what gets passed to HoughCircles
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, 1, minDis, param1=14, param2=25, minRadius=minR, maxRadius=maxR)

if circles is not None:
    circles = numpy.round(circles[0, :]).astype("int")
    for (x, y, r) in circles:
        cv2.circle(output, (x, y), r, (0, 255, 0), 2)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
cv2.imshow("result", numpy.hstack([image, output]))
cv2.waitKey()

Here's the result.

A couple of suggestions I can think of to help improve or find better params:

  1. Pass in the blurred image to the HoughCircles function. Currently, you're passing in the grayscale version of the image, not the blurred one.

  2. Blur size and type. Blurrier images can help downstream algorithms. Kernel sizes also tend to be square, e.g. (20, 20); using a non-square kernel like (20, 30) might give different results.

  3. Consider contrast correction such as equalizeHist or CLAHE - it can help with edge detection in images that have low contrast or faded edges (see the first sketch after this list).

  4. Magic numbers. According to the docs, param2 can be made larger to remove false positives, so experiment with that. As for param1 and dp, you might have to permute some combinations and see what works best. And if you get into comparing multiple image sets with different lighting conditions, you might have to normalise the images first. Magic numbers seem to be part and parcel of out-of-the-box algorithms, so the best way to find what works is to permute the inputs, output the images all at once to see what works, and then fine-tune (see the second sketch after this list).
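
For point 3, here's a minimal sketch of what swapping equalizeHist for CLAHE could look like; the clipLimit and tileGridSize values are just illustrative starting points, not values tuned for this image:

import cv2

image = cv2.imread("bact.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# CLAHE equalizes the histogram in local tiles with a clip limit,
# which avoids over-amplifying noise the way a global equalizeHist can
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
hist = clahe.apply(gray)

# same blur step as above; `blur` is then what gets fed to HoughCircles
blur = cv2.GaussianBlur(hist, (31, 31), 0)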
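
And for point 4, one way to "permute the inputs and output the images all at once" is a small brute-force grid search over dp, param1 and param2 that writes one annotated image per combination; the value grids below are arbitrary examples to start from, not recommended settings:

import itertools
import numpy
import cv2

image = cv2.imread("bact.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(cv2.equalizeHist(gray), (31, 31), 0)

height, width = blur.shape[:2]
minR, maxR, minDis = round(width/65), round(width/11), round(width/7)

# try every combination of the candidate values and save one image each,
# so the best-looking parameter set can be picked by eye and then fine-tuned
for dp, p1, p2 in itertools.product([1, 1.5, 2], [10, 14, 20], [20, 25, 30]):
    output = image.copy()
    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp, minDis,
                               param1=p1, param2=p2,
                               minRadius=minR, maxRadius=maxR)
    if circles is not None:
        for (x, y, r) in numpy.round(circles[0, :]).astype("int"):
            cv2.circle(output, (x, y), r, (0, 255, 0), 2)
    cv2.imwrite("hough_dp%s_p1%s_p2%s.png" % (dp, p1, p2), output)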

Best of luck in your biology-related OpenCV journey :)
