Extract car license plate number in image

I want to find the car plate number to search in a database. Since Saudi plates are different, I face this problem:

[image: input plate]

The result of the code:

[image: current output]

My current approach is to search for the cross in OpenCV using edge detection. How can I find the cross and extract the character below it (using contours and edge detection)?

import cv2
import imutils
import matplotlib.pyplot as plt
import numpy as np
import pytesseract

img = cv2.imread('M4.png')
img = cv2.resize(img, (820,680) )
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) #convert to grey scale
gray =  cv2.blur(gray, (3,3))#Blur to reduce noise
edged = cv2.Canny(gray, 10, 100) #Perform Edge detection
# find contours in the edged image, keep only the largest
# ones, and initialize our screen contour
cnts = cv2.findContours(edged.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
cnts = sorted(cnts, key = cv2.contourArea, reverse = True)[:10]
screenCnt = None

# loop over our contours
for c in cnts:
    # approximate the contour
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.1 * peri, True)
    # if our approximated contour has four points, then
    # we can assume that we have found our screen
    if len(approx) == 4:
        screenCnt = approx
        break
if screenCnt is None:
    raise SystemExit("No contour detected")

cv2.drawContours(img, [screenCnt], -1, (0, 255, 0), 3)

# Mask the part other than the number plate
mask = np.zeros(gray.shape, np.uint8)
cv2.drawContours(mask, [screenCnt], 0, 255, -1)
new_image = cv2.bitwise_and(img, img, mask=mask)

# Now crop
(x, y) = np.where(mask == 255)
(topx, topy) = (np.min(x), np.min(y))
(bottomx, bottomy) = (np.max(x), np.max(y))
Cropped = gray[topx:bottomx+1, topy:bottomy+1]

#Read the number plate
text = pytesseract.image_to_string(Cropped, config='--psm 11')
print("Detected Number is:",text)
plt.suptitle(text)
plt.subplot(1,4,1),plt.imshow(img,cmap = 'gray')
plt.title('Original'), plt.xticks([]), plt.yticks([])
plt.subplot(1,4,2),plt.imshow(gray,cmap = 'gray')
plt.title('gray'), plt.xticks([]), plt.yticks([])
plt.subplot(1,4,3),plt.imshow(Cropped,cmap = 'gray')
plt.title('Cropped'), plt.xticks([]), plt.yticks([])
plt.subplot(1,4,4),plt.imshow(edged,cmap = 'gray')
plt.title('edged'), plt.xticks([]), plt.yticks([])
plt.show()

# check database

# record the entry

cv2.waitKey(0)
cv2.destroyAllWindows()

Thanks for your help.

Here's an approach:

  • Convert image to grayscale and Gaussian blur
  • Otsu's threshold to get a binary image
  • Find contours and sort contours from left-to-right to maintain order
  • Iterate through contours and filter for the bottom two rectangles
  • Extract ROI and OCR

After converting to grayscale and Gaussian blurring, we apply Otsu's threshold to get a binary image. We find contours, then sort them using imutils.contours.sort_contours() with the left-to-right parameter so the contours stay in reading order. From here we iterate through the contours and filter them using these three conditions:

  • The contour must be larger than some specified threshold area (3000)
  • The width must be larger than the height
  • The center of each ROI must be in the bottom half of the image. We find the center of each contour and compare it to where it is located on the image.
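The three conditions can be collected into a small predicate; the name keep_roi and the parameterization are illustrative, with the 3000 threshold taken from the answer's code:

```python
def keep_roi(area, w, h, y, img_height, min_area=3000):
    """Return True when a contour's stats pass all three filters:
    big enough, wider than tall, and centered in the bottom half."""
    center_y = y + h / 2
    return area > min_area and w > h and center_y > img_height / 2

# A wide box near the bottom of a 600px-tall image passes...
print(keep_roi(area=5000, w=120, h=60, y=450, img_height=600))  # True
# ...while the same box near the top of the image is rejected.
print(keep_roi(area=5000, w=120, h=60, y=50, img_height=600))   # False
```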

If a ROI passes these filtering conditions, we extract the ROI using numpy slicing and then throw it into Pytesseract. Here are the detected ROIs that pass the filter, highlighted in green:

[image: detected ROIs highlighted in green]

Since we already have the bounding box, we extract each ROI:

[images: the two extracted ROIs]

We throw each individual ROI into Pytesseract one at a time to construct our license plate string. Here's the result:

License plate: 430SRU

Code

import cv2
import pytesseract
from imutils import contours

pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"

image = cv2.imread('1.png')
height, width, _ = image.shape
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5,5), 0)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
cnts = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cnts, _ = contours.sort_contours(cnts, method="left-to-right")

plate = ""
for c in cnts:
    area = cv2.contourArea(c)
    x,y,w,h = cv2.boundingRect(c)
    center_y = y + h/2
    if area > 3000 and (w > h) and center_y > height/2:
        ROI = image[y:y+h, x:x+w]
        data = pytesseract.image_to_string(ROI, lang='eng', config='--psm 6')
        plate += data

print('License plate:', plate)
