
Implementing a license plate detection algorithm

To improve my knowledge of image processing and gain some hands-on experience with the topic, I decided to build a license plate recognition algorithm on the Android platform.

The first step is detection, for which I decided to implement a recent paper titled "A Robust and Efficient Approach to License Plate Detection". The paper presents its idea very well and uses quite simple techniques to achieve detection. Apart from some details that are missing from the paper, I implemented the bilinear downsampling, the conversion to grayscale, and the edge detection plus adaptive thresholding described in Sections 3A, 3B.1, and 3B.2. Unfortunately, I am not getting the output the paper presents in, for example, Figures 3 and 6.
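
The downsampling code is omitted from the listing below; purely as a point of reference, a bilinear downsample over the 2D grayscale array could be sketched roughly like this (the paper does not prescribe an exact implementation, so the method name and signature here are only illustrative):

private int[][] bilinearDownsample(int[][] src, int newWidth, int newHeight) {
    int srcWidth = src.length;
    int srcHeight = src[0].length;
    int[][] dst = new int[newWidth][newHeight];

    // Ratio between source and destination coordinates
    double xRatio = (double) (srcWidth - 1) / newWidth;
    double yRatio = (double) (srcHeight - 1) / newHeight;

    for (int x = 0; x < newWidth; x++) {
        for (int y = 0; y < newHeight; y++) {
            double srcX = x * xRatio;
            double srcY = y * yRatio;
            int x0 = (int) srcX;
            int y0 = (int) srcY;
            double dx = srcX - x0;
            double dy = srcY - y0;

            // Weighted average of the four surrounding source pixels
            double value =
                      src[x0][y0]         * (1 - dx) * (1 - dy)
                    + src[x0 + 1][y0]     * dx       * (1 - dy)
                    + src[x0][y0 + 1]     * (1 - dx) * dy
                    + src[x0 + 1][y0 + 1] * dx       * dy;
            dst[x][y] = (int) Math.round(value);
        }
    }
    return dst;
}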

The image I use for testing is as follows:

[color image]

The grayscale (and downsampled) version looks fine (see the bottom of this post for the actual implementation). I used a well-known combination of the RGB components to produce it (the paper does not mention how, so I took a guess).

[grayscale image]

Next is the initial edge detection using the Sobel filter outlined in the paper. This produces an image similar to the ones presented in Figure 6 of the paper.

[edge-detected image]

And finally, to remove the "weak edges", they apply adaptive thresholding using a 20x20 window. This is where things go wrong.

[thresholded image]

As you can see, it does not function properly, even though I am using their stated parameter values. Additionally, I have tried:

  • Changing the Beta parameter.
  • Using a 2D int array instead of Bitmap objects to simplify creating the integral image.
  • Trying a higher Gamma parameter so the initial edge detection allows more "edges".
  • Changing the window size to, e.g., 10x10.

Yet none of the changes made an improvement; it keeps producing images like the one above. My question is: what am I doing differently from what is outlined in the paper, and how can I get the desired output?

Code

The (cleaned-up) code I use:

public int[][] toGrayscale(Bitmap bmpOriginal) {

    int width = bmpOriginal.getWidth();
    int height = bmpOriginal.getHeight();

    // color information
    int R, G, B;
    int pixel;

    int[][] greys = new int[width][height];

    // scan through all pixels
    for (int x = 0; x < width; ++x) {
        for (int y = 0; y < height; ++y) {
            // get pixel color
            pixel = bmpOriginal.getPixel(x, y);
            R = Color.red(pixel);
            G = Color.green(pixel);
            B = Color.blue(pixel);
            int gray = (int) (0.2989 * R + 0.5870 * G + 0.1140 * B);
            greys[x][y] = gray;
        }
    }
    return greys;
}

The code for edge detection:

private int[][] detectEdges(int[][] detectionBitmap) {

    int width = detectionBitmap.length;
    int height = detectionBitmap[0].length;
    int[][] edges = new int[width][height];

    // Loop over the pixels, skipping a two-pixel border on the left and right so the ±2 neighbours exist
    int c1 = 0;
    int c2 = 0;
    for (int y = 0; y < height; y++) {
        for (int x = 2; x < width -2; x++) {
            // Calculate d0 for each pixel
            int p0 = detectionBitmap[x][y];
            int p1 = detectionBitmap[x-1][y];
            int p2 = detectionBitmap[x+1][y];
            int p3 = detectionBitmap[x-2][y];
            int p4 = detectionBitmap[x+2][y];


            int d0 = Math.abs(p1 + p2 - 2*p0) + Math.abs(p3 + p4 - 2*p0);
            if(d0 >= Gamma) {
                c1++;
                edges[x][y] = Gamma;
            } else {
                c2++;
                edges[x][y] = d0;
            }
        }
    }
    return edges;
}

The code for adaptive thresholding. The SAT (summed-area table) implementation is taken from here:

private int[][] AdaptiveThreshold(int[][] detectionBitmap) {

    // Create the integral image
    processSummedAreaTable(detectionBitmap);

    int width = detectionBitmap.length;
    int height = detectionBitmap[0].length;

    int[][] binaryImage = new int[width][height];

    int white = 0;
    int black = 0;
    int h_w = 20; // The window size
    int half = h_w/2;

    // Loop over all pixels in the bitmap
    for (int y = half; y < height - half; y++) {
        for (int x = half; x < width - half; x++) {
            // Sum the pixel values in the h_w x h_w window around (x, y)
            int sum = 0;
            for(int k =  -half; k < half - 1; k++) {
                for (int j = -half; j < half - 1; j++) {
                    sum += detectionBitmap[x + k][y + j];
                }
            }

            if(detectionBitmap[x][y] >= (sum / (h_w * h_w)) * Beta) {
                binaryImage[x][y] = 255;
                white++;
            } else {
                binaryImage[x][y] =  0;
                black++;
            }
        }
    }
    return binaryImage;
}

/**
 * Process given matrix into its summed area table (in-place)
 * O(MN) time, O(1) space
 * @param matrix    source matrix
 */
private void processSummedAreaTable(int[][] matrix) {
    int rowSize = matrix.length;
    int colSize = matrix[0].length;
    for (int i=0; i<rowSize; i++) {
        for (int j=0; j<colSize; j++) {
            matrix[i][j] = getVal(i, j, matrix);
        }
    }
}
/**
 * Helper method for processSummedAreaTable
 * @param row       current row number
 * @param col       current column number
 * @param matrix    source matrix
 * @return      sub-matrix sum
 */
private int getVal (int row, int col, int[][] matrix) {
    int leftSum;                    // sub matrix sum of left matrix
    int topSum;                     // sub matrix sum of top matrix
    int topLeftSum;                 // sub matrix sum of top left matrix
    int curr = matrix[row][col];    // current cell value
    /* top left value is itself */
    if (row == 0 && col == 0) {
        return curr;
    }
    /* top row */
    else if (row == 0) {
        leftSum = matrix[row][col - 1];
        return curr + leftSum;
    }
    /* left-most column */
    if (col == 0) {
        topSum = matrix[row - 1][col];
        return curr + topSum;
    }
    else {
        leftSum = matrix[row][col - 1];
        topSum = matrix[row - 1][col];
        topLeftSum = matrix[row - 1][col - 1]; // overlap between leftSum and topSum
        return curr + leftSum + topSum - topLeftSum;
    }
}
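
For reference, once processSummedAreaTable has run in-place, the array holds cumulative sums, and the sum of any rectangular window can then be read from four corner entries in O(1) instead of being re-added pixel by pixel. A minimal sketch of such a lookup (the helper name is illustrative, not from the paper):

/**
 * Illustrative only: sum of the inclusive window [x0..x1] x [y0..y1],
 * read from a summed-area table sat[x][y] in constant time.
 */
private int windowSum(int[][] sat, int x0, int y0, int x1, int y1) {
    int total = sat[x1][y1];
    if (x0 > 0) {
        total -= sat[x0 - 1][y1];
    }
    if (y0 > 0) {
        total -= sat[x1][y0 - 1];
    }
    if (x0 > 0 && y0 > 0) {
        total += sat[x0 - 1][y0 - 1];
    }
    return total;
}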

Marvin provides an approach to find text regions. Perhaps it can be a starting point for you:

Find Text Regions in Images: http://marvinproject.sourceforge.net/en/examples/findTextRegions.html

This approach was also used in this question:
How do I separates text region from image in java

Using your image I got this output:

[output image]

Source Code:

package textRegions;

import static marvin.MarvinPluginCollection.findTextRegions;

import java.awt.Color;
import java.util.List;

import marvin.image.MarvinImage;
import marvin.image.MarvinSegment;
import marvin.io.MarvinImageIO;

public class FindVehiclePlate {

    public FindVehiclePlate() {
        MarvinImage image = MarvinImageIO.loadImage("./res/vehicle.jpg");
        image = findText(image, 30, 20, 100, 170);
        MarvinImageIO.saveImage(image, "./res/vehicle_out.png");
    }

    public MarvinImage findText(MarvinImage image, int maxWhiteSpace, int maxFontLineWidth, int minTextWidth, int grayScaleThreshold){
        List<MarvinSegment> segments = findTextRegions(image, maxWhiteSpace, maxFontLineWidth, minTextWidth, grayScaleThreshold);

        for(MarvinSegment s:segments){
            if(s.height >= 10){
                s.y1-=20;
                s.y2+=20;
                image.drawRect(s.x1, s.y1, s.x2-s.x1, s.y2-s.y1, Color.red);
                image.drawRect(s.x1+1, s.y1+1, (s.x2-s.x1)-2, (s.y2-s.y1)-2, Color.red);
                image.drawRect(s.x1+2, s.y1+2, (s.x2-s.x1)-4, (s.y2-s.y1)-4, Color.red);
            }
        }
        return image;
    }

    public static void main(String[] args) {
        new FindVehiclePlate();
    }
}
