
Python OpenCV - cv.inRange() “sensitivity”?

import cv2
import numpy as np

img = cv2.imread('/home/user/Documents/workspace/ImageProcessing/img.JPG')
image = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

#red, blue, yellow, and gray
boundaries = [
([17, 15, 100], [50, 56, 200]),
([86, 31, 4], [220, 88, 50]),
([25, 146, 190], [62, 174, 250]),
([103, 86, 65], [145, 133, 128])]


for i, (lower, upper) in enumerate(boundaries):

    lower = np.array(lower, dtype="uint8")
    upper = np.array(upper, dtype="uint8")

    mask = cv2.inRange(image, lower, upper)
    output = cv2.bitwise_and(image, image, mask=mask)

    cv2.imwrite(str(i) + 'image.jpg', output)

I am trying to isolate the colors red, blue, yellow and gray from an image (separately). It is working so far, but the "sensitivity" is way too low: the algorithm misses some smaller color spots. Is there a way to calibrate this? Thanks!

Edit: [input image]

[Output images 1-4, one per color mask]

The inRange function does not have a built-in sensitivity; it only compares values. inRange(x, 10, 20) will only give you the pixels whose values are in {10, 11, ..., 20}.
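For example, here is a minimal sanity check on a toy single-channel array (the values are made up purely to illustrate the strict comparison):

x = np.array([[9, 10, 15], [20, 21, 200]], dtype="uint8")
# 255 where 10 <= value <= 20, 0 everywhere else
mask = cv2.inRange(x, np.array([10], dtype="uint8"), np.array([20], dtype="uint8"))
print(mask)
# [[  0 255 255]
#  [255   0   0]]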

One way to overcome this is to introduce your own sensitivity measure.

s = 5  # sensitivity: widen each bound by 5 (out of 256 possible values in [0, 255])

for i, (lower, upper) in enumerate(boundaries):

    # Widen each boundary by s, clamping to the valid [0, 255] range
    lower = np.array([color - s if color - s > -1 else 0 for color in lower], dtype="uint8")
    upper = np.array([color + s if color + s < 256 else 255 for color in upper], dtype="uint8")

    mask = cv2.inRange(image, lower, upper)
    output = cv2.bitwise_and(image, image, mask=mask)

    cv2.imwrite(str(i) + 'image.jpg', output)

Or you can smooth the image beforehand to get rid of such noisy pixels. Smoothing pulls neighbouring pixel values closer together, so pixels that fall just outside a boundary may move into the range.
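For instance, a minimal sketch of pre-smoothing with a Gaussian blur before applying the same masks; the (5, 5) kernel size is an assumed starting point you would tune for your image:

# Blur once, then threshold the blurred image instead of the original
blurred = cv2.GaussianBlur(image, (5, 5), 0)

for i, (lower, upper) in enumerate(boundaries):

    lower = np.array(lower, dtype="uint8")
    upper = np.array(upper, dtype="uint8")

    mask = cv2.inRange(blurred, lower, upper)
    output = cv2.bitwise_and(image, image, mask=mask)

    cv2.imwrite(str(i) + 'smoothed_image.jpg', output)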
