
How can I train my Python-based Tesseract OCR to read different National Identity Cards?

I am working with Python to build an OCR system that reads ID cards and returns the exact details from the image, but it is not giving me correct results: Tesseract misreads many characters. How can I train Tesseract so that it reads the ID card accurately and returns the right details? Furthermore, how do I create the .tiff file and make Tesseract training work for my project?

Steps to improve Pytesseract recognition:

  1. Clean your image arrays so there is only text (font-generated, not handwritten). The edges of the letters should be free of distortion. Apply thresholding (try different values) and some smoothing filters. I also recommend morphological opening/closing, but that is only a bonus; a minimal preprocessing sketch follows this list. This is an exaggerated example of what should enter Pytesseract recognition in the form of an array: https://i.ytimg.com/vi/1ns8tGgdpLY/maxresdefault.jpg

  2. Resize the image containing the text you want to recognize to a higher resolution

  3. Pytesseract should generally recognize letters of any kind, but installing the font in which the text is written greatly increases accuracy.
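
Here is a minimal preprocessing sketch for steps 1 and 2, assuming the card image is loaded with OpenCV; the file name, threshold value, kernel size, and scale factor are only illustrative and should be tuned for your own images.

import cv2

img = cv2.imread('id_card.jpg')               # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # a single channel is enough for OCR
gray = cv2.medianBlur(gray, 3)                # smoothing filter (step 1)

# Step 2: upscale so small print is large enough for Tesseract
gray = cv2.resize(gray, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)

# Step 1: threshold to a clean black-and-white image (try different values, or cv2.THRESH_OTSU)
_, binary = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY)

# Bonus: morphological opening/closing to remove specks and close small gaps
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2))
binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)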

How to install new fonts into pytesseract:

  1. Get your desired font in TIFF format

  2. Upload it to http://trainyourtesseract.com/ and receive the trained data by email (EDIT: this site doesn't exist anymore; at the moment you have to find an alternative or train the font yourself)

  3. Add the trained data file (*.traineddata) to this folder: C:\Program Files (x86)\Tesseract-OCR\tessdata

  4. Add this lang argument to the Pytesseract recognition function:

  • Let's say you have two trained fonts: font1.traineddata and font2.traineddata

  • To use both, use this command (a fuller usage sketch follows below)

    txt = pytesseract.image_to_string(img, lang='font1+font2')
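
As a fuller sketch of step 4, assuming font1.traineddata and font2.traineddata are already in the tessdata folder; the --tessdata-dir option is only needed if the files live outside the default tessdata path, and the image path here is hypothetical:

import cv2
import pytesseract

img = cv2.imread('id_card.jpg')  # hypothetical input image
# Point Tesseract at the folder holding the custom .traineddata files
config = r'--tessdata-dir "C:\Program Files (x86)\Tesseract-OCR\tessdata"'
txt = pytesseract.image_to_string(img, lang='font1+font2', config=config)
print(txt)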

Here is some code to test your recognition on web images:

import os
import urllib.request

import cv2
import numpy as np
import pytesseract

# Point pytesseract at the Tesseract executable and its language data
pytesseract.pytesseract.tesseract_cmd = 'C:/Program Files (x86)/Tesseract-OCR/tesseract'
os.environ['TESSDATA_PREFIX'] = 'C:/Program Files (x86)/Tesseract-OCR'

def url_to_image(url):
    # Download the image and decode it into an OpenCV (numpy) array
    resp = urllib.request.urlopen(url)
    image = np.asarray(bytearray(resp.read()), dtype="uint8")
    image = cv2.imdecode(image, cv2.IMREAD_COLOR)
    return image

url = 'http://jeroen.github.io/images/testocr.png'
img = url_to_image(url)

# Preprocess: smooth out noise, then binarize before recognition
#img = cv2.GaussianBlur(img, (5, 5), 0)
img = cv2.medianBlur(img, 5)
retval, img = cv2.threshold(img, 150, 255, cv2.THRESH_BINARY)

txt = pytesseract.image_to_string(img, lang='eng')
print('recognition:', txt)
>>> txt
'This ts a lot of 12 point text to test the\nocr code and see if it works on all types\nof file format\n\nThe quick brown dog jumped over the\nlazy fox The quick brown dog jumped\nover the lazy fox The quick brown dog\njumped over the lazy fox The quick\nbrown dog jumped over the lazy fox'
