
Normalization in Harris Corner Detector in OpenCV C++?

I was reading the documentation on finding corners in an image using the Harris corner detector. I couldn't understand why they normalized the image after applying the Harris corner function. I am a bit confused about the idea of normalization. Can someone explain why we normalize an image? Also, what does convertScaleAbs() do? I am still a beginner in OpenCV, so it's hard to understand from the documentation.

cornerHarris( src_gray, dst, blockSize, apertureSize, k, BORDER_DEFAULT );

  /// Normalizing
  normalize( dst, dst_norm, 0, 255, NORM_MINMAX, CV_32FC1, Mat() ); // (I could not understand this line)
  convertScaleAbs( dst_norm, dst_norm_scaled ); // ???

Thanks

I recommend you get familiar with image processing fundamentals before diving into OpenCV. Having a solid notion of basic concepts like contrast, binarization, clipping, threshold, slice, histogram, etc. (just to mention a few) will make your journey through OpenCV easier.

That said, Wikipedia defines it like this: "Normalization is a process that changes the range of pixel intensity values... Normalization is sometimes called contrast stretching or histogram stretching".

To illustrate this definition I will try to give you a simple example:

Imagine you have an 8-bit grayscale image, where possible pixel intensity values go from 0 to 255. If the image has "low contrast", its histogram will show a concentration of pixels around a certain value, as you can see in the following picture:

[Histogram of a low-contrast image]

Now what normalization does is to "take" this histogram and stretch it. It applies an intensity transformation that maps the original minimum and maximum intensity values to a new pair of minimum and maximum values spanning the full range of intensities (0 to 255).

As for convertScaleAbs(), the documentation says that it scales, calculates absolute values, and converts the result to 8-bit. Not much more to say about it.

The full prototype is void convertScaleAbs(InputArray src, OutputArray dst, double alpha=1, double beta=0), so used like that it will just calculate the absolute value of each element in the matrix and convert it to a valid 8-bit unsigned value.

You're probably looking at the code of the OpenCV tutorials.

By normalizing the corner responses (which lie in some unknown interval) to the interval [0, 255] with

normalize( dst, dst_norm, 0, 255, NORM_MINMAX, CV_32FC1, Mat() );

it's easier to select a threshold (since you now know the threshold will also be in the interval [0, 255]) to keep only the strongest responses. The range [0, 255] is useful when, as in this case, you get the threshold value through a slider, which can only take integer values.

convertScaleAbs(dst_norm, dst_norm_scaled);

is needed only to convert dst_norm to a Mat of type CV_8U, so it can be shown correctly by imshow.
