I'm using the following code to detect certain shapes in an image:
import cv2
import numpy as np

img = cv2.imread("006.jpg")
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# fixed inverse-binary threshold at 127 (flag 1 == cv2.THRESH_BINARY_INV)
ret, thresh = cv2.threshold(grey, 127, 255, cv2.THRESH_BINARY_INV)
cv2.imshow('img', thresh)
cv2.waitKey(0)

# flag 1 == cv2.RETR_LIST; findContours returns a tuple in OpenCV 4,
# so sort into a new list instead of sorting in place
contours, h = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=len)

for contour in contours:
    approx = cv2.approxPolyDP(contour, 0.01 * cv2.arcLength(contour, True), True)
    # star -> yellow
    if len(approx) == 10:
        cv2.drawContours(img, [contour], 0, (0, 255, 255), -1)
    # circle -> black
    elif len(approx) >= 11:
        cv2.drawContours(img, [contour], 0, (0, 0, 0), -1)
    # triangle -> green
    elif len(approx) == 3:
        cv2.drawContours(img, [contour], 0, (0, 255, 0), -1)
    # square -> blue
    elif len(approx) == 4:
        cv2.drawContours(img, [contour], 0, (255, 0, 0), -1)
    # pentagon -> red
    elif len(approx) == 5:
        cv2.drawContours(img, [contour], 0, (0, 0, 255), -1)

cv2.imshow('img', img)
cv2.waitKey(0)
This code works well for images on my computer, but when I print out an image, take a picture of it, and run the code on that photo (see the attached image), it doesn't work as well as it should.
I have already tried blurring and Canny edge detection, but I can't smooth the second picture enough.
I hope someone can help!
A fixed threshold value (127 in your case) is probably not a good idea for photos taken with a camera. It works in the abstract case, where the shapes are pure colours unaffected by shading, but 127 seems to be too high a value for the image you provided.
Why not try Otsu's method? Here's an example of how to use it with Python in OpenCV. It makes the threshold level invariant to the real-world lighting conditions you get in a photo taken with a camera.