
How to extract white region in an image

I have a sample image like this:

[image: sample input image]

I'm looking for a way to remove the noise from the image so that I end up with just black text on a white background, which I can then send to Tesseract.

I've tried morphological opening with

kernel = np.ones((4,4),np.uint8)
opening = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
cv2.imshow("opening", opening)

but it doesn't seem to work.

I've also tried finding contours:

img = cv2.cvtColor(rotated, cv2.COLOR_BGR2GRAY)
(cnts, _) = cv2.findContours(img, cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
cnts = sorted(cnts, key = cv2.contourArea, reverse = True)[:1]
for c in cnts:
    x,y,w,h = cv2.boundingRect(c)
    roi=rotated[y:y+h,x:x+w].copy()
    cv2.imwrite("roi.png", roi)

With the above code, I get the following contours:

[image: detected contours]

which leads to this image when cropped:

[image: cropped result]

which is still not good enough. I want black text on a white background so that I can send it to Tesseract OCR and get a good success rate.

Is there anything else I can try?

Update

Here is an additional, similar image. This one is a bit easier because it has a smooth rectangle in it.

[image: second sample image]

The following works for your given example, although it might need tweaking for a wider range of images.

import numpy as np
import cv2

image_src = cv2.imread("input.png")
gray = cv2.cvtColor(image_src, cv2.COLOR_BGR2GRAY)
ret, gray = cv2.threshold(gray, 250, 255, cv2.THRESH_BINARY)

# OpenCV 3.x signature; in 2.x/4.x findContours returns only (contours, hierarchy)
image, contours, hierarchy = cv2.findContours(gray, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
largest_area = sorted(contours, key=cv2.contourArea)[-1]

# Fill the largest contour to build a mask, keep only the area inside it,
# then add the inverted mask so that everything outside it becomes white
mask = np.zeros(image_src.shape, np.uint8)
cv2.drawContours(mask, [largest_area], 0, (255, 255, 255), -1)
dst = cv2.bitwise_and(image_src, mask)
mask = 255 - mask
roi = cv2.add(dst, mask)

# Find contours again on the cleaned image
roi_gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
ret, gray = cv2.threshold(roi_gray, 250, 255, cv2.THRESH_BINARY)
image, contours, hierarchy = cv2.findContours(gray, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

max_x = 0
max_y = 0
min_x = image_src.shape[1]
min_y = image_src.shape[0]

# Union of the bounding rects of all contours within a sensible size range
for c in contours:
    if 150 < cv2.contourArea(c) < 100000:
        x, y, w, h = cv2.boundingRect(c)
        min_x = min(x, min_x)
        min_y = min(y, min_y)
        max_x = max(x + w, max_x)
        max_y = max(y + h, max_y)

roi = roi[min_y:max_y, min_x:max_x]
cv2.imwrite("roi.png", roi)

Giving you the following type of output images:

[image: output for the first sample]

And...

[image: output for the second sample]

The code works by first locating the largest contour area. From this a mask is created and used to select only the area inside it, i.e. the text. The inverse of the mask is then added to the image to turn the area outside the mask white.

Lastly, contours are found again in this new image. Any contours outside a suitable area range are discarded (this filters out small noise regions), and a bounding rectangle is computed for each remaining contour. The union of these rectangles gives an outer bounding box, which is used to crop the final image.
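Since the end goal is OCR, here is a minimal, hypothetical sketch of feeding the resulting roi.png to Tesseract via the pytesseract wrapper (pytesseract is not part of this answer and needs to be installed separately, along with the Tesseract binary):

import cv2
import pytesseract  # assumption: pytesseract and the Tesseract binary are installed

roi = cv2.imread("roi.png")              # the crop written by the code above
text = pytesseract.image_to_string(roi)  # a config such as "--psm 6" may help for blocks of text
print(text)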

Update - To get the remainder of the image, i.e. with the above area removed, the following could be used:

image_src = cv2.imread("input.png")
gray = cv2.cvtColor(image_src, cv2.COLOR_BGR2GRAY)
ret, gray = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY)
image, contours, hierarchy = cv2.findContours(gray, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
largest_area = sorted(contours, key=cv2.contourArea)[-1]
mask = np.zeros(image_src.shape, np.uint8)
cv2.drawContours(mask, [largest_area], 0, (255, 255, 255), -1)
# Keep only the pixels outside the largest contour
image_remainder = cv2.bitwise_and(image_src, 255 - mask)

cv2.imwrite("remainder.png", image_remainder)

The basic idea of this answer is to use the border around the text.

1) Erode horizontally with a very large kernel, say 100 px wide or about 8 times the width of a single expected character, something like that. It should be done row-wise. The extreme ordinates of the remaining dark pixels give the y-locations of the boundaries around the text.

2) Process vertically the same way to get the x-locations of the boundaries around the text. Then use these locations to crop out the image you want.

-- One benefit of this method is that you get every sentence/word segmented separately, which, I presume, is good for OCR. A rough sketch of both steps is shown below.
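A minimal sketch of the idea, assuming the dark noise outside the white region has already been masked out (e.g. with one of the approaches above); the file name, the Otsu binarisation and the 100 px kernel sizes are assumptions to tune for your images, and this is not the code used for the demos below:

import cv2
import numpy as np

src = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(src, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # black text on white

# 1) Erode with a very wide horizontal kernel so the dark text smears into row bands;
#    the first and last rows containing a dark pixel give the y-extent of the text.
horiz = cv2.erode(binary, np.ones((1, 100), np.uint8))
rows = np.where(horiz.min(axis=1) == 0)[0]

# 2) Same with a tall vertical kernel for the x-extent.
vert = cv2.erode(binary, np.ones((100, 1), np.uint8))
cols = np.where(vert.min(axis=0) == 0)[0]

if rows.size and cols.size:
    crop = src[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
    cv2.imwrite("text_region.png", crop)

To get the per-sentence/per-word segmentation mentioned above, you could instead run cv2.findContours on the eroded images and take a bounding rectangle per blob.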

Happy Coding :)

Edited in by Mark Setchell

Here is a demo of 1)

[image: demo of step 1]

Here is a demo of 2)

[image: demo of step 2]

I get this result:

[image: result]

Source Code:

import copy

import cv2
import numpy as np

if __name__ == '__main__':
  SrcImg = cv2.imread('./Yahi9.png', cv2.IMREAD_GRAYSCALE)
  # The threshold value is ignored when THRESH_OTSU is set; Otsu picks it automatically
  _, BinImg = cv2.threshold(SrcImg, 80, 255, cv2.THRESH_OTSU)

  # findContours modifies its input, hence the deepcopy
  # (OpenCV 2.x signature; 3.x returns an extra image value)
  Contours, Hierarchy = cv2.findContours(image=copy.deepcopy(SrcImg),
                                         mode=cv2.RETR_EXTERNAL,
                                         method=cv2.CHAIN_APPROX_NONE)
  MaxContour, _ = getMaxContour(Contours)
  # Fill the largest contour on a blank canvas and whiten everything outside it
  Canvas = np.ones(SrcImg.shape, np.uint8)
  cv2.drawContours(image=Canvas, contours=[MaxContour], contourIdx=0, color=(255), thickness=-1)
  mask = (Canvas != 255)
  RoiImg = copy.deepcopy(BinImg)
  RoiImg[mask] = 255
  # Close small gaps in the text strokes
  RoiImg = cv2.morphologyEx(src=RoiImg, op=cv2.MORPH_CLOSE, kernel=np.ones((3,3)), iterations=4)
  cv2.imshow('RoiImg', RoiImg)
  cv2.waitKey(0)

Function:

def getMaxContour(contours):
  MaxArea = 0
  Location = 0
  for idx in range(0, len(contours)):
      Area = cv2.contourArea(contours[idx])
      if Area > MaxArea:
          MaxArea = Area
          Location = idx
  MaxContour = np.array(contours[Location])
  return MaxContour, MaxArea

Ehh, it's Python code. It only works when the white region is the largest contour.
