
Detecting blobs in an image with OpenCV

I want to get some descriptors for each white area in an image so I can filter those areas and operate on them separately. How can I do that?

I have read "How to use OpenCV SimpleBlobDetector" and http://www.learnopencv.com/blob-detection-using-opencv-python-c/ , but I still can't get any result with my simple image.

[input image]

Here is my code in Python:

import cv2

img = cv2.imread("map.jpg", cv2.IMREAD_GRAYSCALE)

params = cv2.SimpleBlobDetector_Params()
params.blobColor = 255
params.filterByColor = True
params.minArea = 16
params.filterByArea = True

detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(255 - img)
len(keypoints)
# 0

OpenCV 3.1.0

The image is grayscale.

UPD: Code updated following a comment by @api55.


My goal could be reached with skimage.measure.label (from scikit-image). This function returns a NumPy array of the same shape as the image, with a distinct label for each connected region.
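For illustration, here is a minimal sketch of that approach; the threshold value and the use of skimage.measure.regionprops for per-region descriptors are my own assumptions, not part of the original post:

import cv2
from skimage import measure

img = cv2.imread("map.jpg", cv2.IMREAD_GRAYSCALE)

# Binarize so the white areas become foreground (threshold value assumed).
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# label() returns an array of the same shape where every connected
# white region carries its own integer label (0 is the background).
labels, num = measure.label(binary, return_num=True)

# regionprops() then gives descriptors (area, bounding box, centroid, ...)
# for each labelled region, so the regions can be filtered individually.
for region in measure.regionprops(labels):
    print(region.label, region.area, region.centroid)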

It is still not clear, though, why OpenCV's SimpleBlobDetector doesn't work here.

I was having the same problem. I had to disable the filterByArea parameter:

params.filterByArea = False
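For completeness, a sketch of the detector setup with the area filter disabled, following this answer. Passing the image directly rather than its inverse is an assumption on my part, since blobColor = 255 already targets white blobs:

import cv2

img = cv2.imread("map.jpg", cv2.IMREAD_GRAYSCALE)

params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 255        # look for bright (white) blobs
params.filterByArea = False   # removing the area filter fixed detection for this answer

detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(img)   # assumed: no inversion needed with blobColor = 255
print(len(keypoints))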
