How should I properly extract the digits from a 7-segment display in Python?
I'm working on a project to extract digits from a 7-segment display, and I'm following this guide: https://pyimagesearch.com/2017/02/13/recognizing-digits-with-opencv-and-python/

So far I have successfully extracted the ROI of the LED display, but I'm having trouble producing a clean black-and-white image so that `cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)` can find the digits.

What should I do to produce a usable thresholded image when part of the display is in shadow?
Original photo:

Extracted black-and-white image:

Code:
import cv2
import imutils
from imutils.perspective import four_point_transform

img_name = 'test2.jpeg'
image = cv2.imread(img_name)
image = imutils.resize(image, height=1000)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edged = cv2.Canny(blurred, 50, 200, 255)
#cv2.imshow("test", edged)
#cv2.waitKey(0)

# find contours in the edge map, largest first
cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
displayCnt = None

# loop over the contours
for c in cnts:
    # approximate the contour
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)
    # if the contour has four vertices, then we have found
    # the thermostat display
    if len(approx) == 4:
        displayCnt = approx
        break

# apply a perspective transform to get a top-down view of the display
warped = four_point_transform(gray, displayCnt.reshape(4, 2))
output = four_point_transform(image, displayCnt.reshape(4, 2))

# note: with THRESH_OTSU set, the fixed threshold (222) is ignored and
# Otsu's method picks a single global threshold for the whole ROI
thresh = cv2.threshold(warped, 222, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
cv2.imwrite("black.png", thresh)