
Python + OpenCV + Pytesseract suggestion

I'm trying to OCR this image, whose value varies (0/4 through 4/4): enter image description here

I've been trying to use Pytesseract, but I'm not getting correct results.

This is what I have so far:

import cv2
import pytesseract

# 'screen' is the path to the cropped screenshot
screen_crop = cv2.imread(screen)
screen_gray = cv2.cvtColor(screen_crop, cv2.COLOR_BGR2GRAY)
# Otsu binarization with THRESH_BINARY
screen_thresh = cv2.threshold(screen_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# median filter with ksize=1 (effectively no smoothing)
screen_noise = cv2.medianBlur(screen_thresh, 1)
cv2.imshow('img', screen_noise)
ocr = pytesseract.image_to_string(screen_noise)
print(ocr)
cv2.waitKey(0)

This is the result after processing with OpenCV: enter image description here

The OCR is only returning strings like "re" and "res".

Any suggestions (it doesn't need to be pytesseract)? Thanks!

The problem is that Pytesseract is more accurate when the text is black and the background is white. Therefore, you should use the BINARY_INV threshold type instead of BINARY.
Full code:

import cv2
import pytesseract

# Path to the Tesseract executable (adjust for your installation)
pytesseract.pytesseract.tesseract_cmd = 'C:/Users/stevi/AppData/Local/Tesseract-OCR/tesseract.exe'

if __name__ == '__main__':
    screen_crop = cv2.imread('img.png')
    screen_gray = cv2.cvtColor(screen_crop, cv2.COLOR_BGR2GRAY)

    # Otsu threshold with THRESH_BINARY: white text on a black background
    screen_thresh = cv2.threshold(screen_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
    cv2.namedWindow('BINARY', cv2.WINDOW_NORMAL)
    cv2.imshow('BINARY', screen_thresh)

    # Otsu threshold with THRESH_BINARY_INV: black text on a white background,
    # which is what Tesseract expects
    screen_thresh = cv2.threshold(screen_gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
    cv2.namedWindow('BINARY_INV', cv2.WINDOW_NORMAL)
    cv2.imshow('BINARY_INV', screen_thresh)

    screen_noise = cv2.medianBlur(screen_thresh, 1)
    ocr = pytesseract.image_to_string(screen_noise)
    print(ocr)

    cv2.waitKey(0)
    cv2.destroyAllWindows()

Result:

enter image description here
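
If the output still contains stray characters, an optional tweak (my own suggestion, not part of the answer above) is to pass a Tesseract config string that treats the crop as a single text line and whitelists only the characters that can actually appear, i.e. digits and a slash:

# Optional sketch: '--psm 7' treats the image as one text line; the whitelist
# of digits plus '/' is an assumption based on the 0/4 ... 4/4 values
# described in the question.
ocr = pytesseract.image_to_string(
    screen_noise,
    config='--psm 7 -c tessedit_char_whitelist=0123456789/'
)
print(ocr)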

I've been getting some good OCR results using keras-ocr instead of pytesseract. Here is a link to the colab notebook I have used for testing: https://colab.research.google.com/drive/1ccohrWn98EF4VdAtwl-shs4S5RxDu0Ew

import matplotlib.pyplot as plt
import keras_ocr

# keras-ocr will automatically download pretrained
# weights for the detector and recognizer.
pipeline = keras_ocr.pipeline.Pipeline()

def get_predictions(images, keywords=None, plot=False):
    images = [keras_ocr.tools.read(url) for url in images]
    prediction_groups = pipeline.recognize(images)
    words = [[prediction[0] for prediction in image
              if prediction[0] in (keywords or [])
              or keywords is None]
             for image in prediction_groups]
    if plot:
        # Plot the predictions
        fig, axs = plt.subplots(nrows=len(images), figsize=(20, 20))
        for ax, image, predictions in zip(axs, images, prediction_groups):
            keras_ocr.tools.drawAnnotations(image=image,
                                            predictions=predictions,
                                            ax=ax)
    return words

Input:

search_images = [
    'https://i.stack.imgur.com/ybpke.png',
    'https://cdn1.egglandsbest.com/assets/images/products/_productFeatureMobi/shell_classic-12over@2x.jpg',
    'https://egglandsbest.coyne-digital.com/wp-content/uploads/2014/08/classic-eggs-MTB.png',
    'https://www.utahsown.org/wp-content/uploads/2017/05/egglands_best_eggs_large_18ct_foam_MT.jpg',
    'https://egglandsbest.coyne-digital.com/wp-content/uploads/2014/08/egglands_best_cage-free_eggs_large_12ct_plastic_MT.jpg',
    'https://cdn1.egglandsbest.com/assets/images/products/_productFeatureMobi/shell_classic-24over@2x.jpg',
]

search_keywords = [
    'egglands',
    'best',
    'extra',
    'large',
    'cage',
    'free',
    'vegetarian',
    '24',
    '12',
    '18',
    '014'
]



predicted_words = get_predictions(search_images)

print(predicted_words)

Output:

[['014'], ['your', 'fresh', 'farm', 'nowi', 'for', 'diet', 'nutritious', 'alits', 'egglands', 'eb', 'best', 'excellent', 'source', 'ofe', 'brandspark', 'vitamins', 'ppro', 'most', 'b5', 'egg', 'b12', 'superior', 'tasting', 'b2', 'americas', 'd', 'e', 'trusted', 'large', 'plus125mg', 'omega', '3', 'grade', 'a', 'eggs', '12', 'saturated', 'fat', '250', 'less', 'american', 'by', 'regular', 'eggs', 'than', 'shoppers', 'fed', 'hens', 'vegetarian', 'per', 'egg', 'lb', 'oz', 'boo', 'colestero', 'coten', 'net', 'wt', '24', 'oz1', 'b', 'facts', 'fon', 'ssee', 'uirmon', 's', 'n', ''], ['farm', 'fresh', 'stays', 'nowi', 'egglands', 'longer', 'fresher', 'best', 'lles', 'vitatnins', 'd', 'biz', 'e', 'zeggse', 'b', 'gradealarge', 'amlne', 'hs', 'ule', 'oe', 'raing', 'doe', 'taltes', 'ce'], ['stays', 'nowi', 'longer', 'eb', 'fresher', 'farm', 'fresh', 'excellent', 'source', 'of', 'eggiands', 'vitamins', 'd', 'brandseer', 'b12', 'e', 'most', 'trusted', 'good', 'best', 'source', 'of', 'soerens', 'vitamins', 'b2', 'b5', 'plusllsmg', 'omega', '3', 'anericas', 'superior', 'tasting', 'egs', '250', 'less', 'saturated', 'fat', '18', 'eggssa', 'large', 'gradea', 'than', 'regular', 'eggs', 'peregg', 'lleg', 'ensizels', 'dibs', 'asia', 'cottn', 'vegetarian', 'fed', 'hens'], ['farm', 'fresh', 'stays', 'nowa', 'le', 'egglands', 'longer', 'free', 'eb', 'fresher', 'best', 'd', 'cage', 'pro', 'excellent', 'source', 'of', 'vitamins', 'd', 'b12', 'e', 'most', 'good', 'source', 'of', 'trusted', 'vitamins', 'b2', 'b5', 'vecetarian', 'plusil', 'fed', 'smess', 'hens', 'omega', '3', '259', '12', 'eggs', 'saturated', 'grade', 'fat', 'ag', 'large', 'brown', 'than', 'regular', 'eggs', 'etranso'], ['your', 'nowhi', 'for', 'diet', 'nutritious', 'eb', 'fresh', 'farm', '0', 'r', 'egglands', 'excellent', 'source', 'of', 'vitamins', 'best', 'b2', 'b12', 'b5', 'd', 'e', 'tasting', 'egg', 'plusi25mg', 'americas', 'superior', 'omega', '3', '250', 'saturated', 'fat', 'less', 'large', 'eggs', 'than', 'regular', 'a', 'grade', 'egg', 'per', 'wuamon', 'icts', 'fon', 'chclesten', 'content', 'sel', '24', 'eggs', 'fed', 'vegetarian', 'hens', 'usda', 'keep', 'refrigerated', 'bandsparl', 'a', 'or', 'below', '45f', 'at', 'most', 'gde', 'trusted', 'wt', '15', 'oz', '3', 'lbsi', '1301', 'net', 'american', 'shofters', 'atons', 'torc', 's']]

You can pass a list of image URLs to perform OCR on and, optionally, a list of keywords to look for in those images. It returns a list of the words found in each image. You can also visualize the output and see annotated bounding boxes for each detection.
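
For example (reusing the get_predictions helper and the search_images / search_keywords lists defined above; matched_words is just an illustrative variable name), this call keeps only the listed keywords and plots the annotated detections:

# Keep only the words listed in search_keywords and draw the annotated
# bounding boxes for every image.
matched_words = get_predictions(search_images,
                                keywords=search_keywords,
                                plot=True)
print(matched_words)  # one list of matched keywords per input image
plt.show()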
