
OpenCV: Normalization of 16bit grayscale image gives weak result

I want to stretch the contrast of a 16-bit grayscale image, but void normalize(InputArray src, OutputArray dst, double alpha=1, double beta=0, int norm_type=NORM_L2, int dtype=-1, InputArray mask=noArray()) gives me a slightly brighter image that is still too dark.

Documentation: http://docs.opencv.org/2.4/modules/core/doc/operations_on_arrays.html#normalize

It says that alpha is the lower limit and beta the upper limit, so for a 16-bit image I would expect 0 and 65535.0 to be the correct values. I did some research, and most answers pointed out that alpha and beta are the minimum and maximum of the normalized image.
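For reference, with NORM_MINMAX the normalize() call linearly maps the source minimum to alpha and the source maximum to beta. A minimal self-contained sketch of that mapping (plain C++, no OpenCV; the function name is made up for illustration):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Linearly stretch values so that min(src) -> alpha and max(src) -> beta,
// which is what cv::normalize(..., alpha, beta, cv::NORM_MINMAX) computes.
std::vector<uint16_t> minMaxStretch(const std::vector<uint16_t>& src,
                                    double alpha, double beta) {
    auto mm = std::minmax_element(src.begin(), src.end());
    double lo = *mm.first;
    double range = *mm.second - lo;
    std::vector<uint16_t> dst(src.size());
    for (size_t i = 0; i < src.size(); ++i) {
        double t = range > 0 ? (src[i] - lo) / range : 0.0;
        dst[i] = static_cast<uint16_t>(alpha + t * (beta - alpha) + 0.5);
    }
    return dst;
}
```

So with alpha = 0 and beta = 65535, the darkest pixel becomes 0 and the brightest 65535; the stretch is only "weak" if the source already contains near-black and near-white outliers.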

#include "stdafx.h"
#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    // Load the file as-is: keep the 16-bit depth, single channel
    cv::Mat image = cv::imread("darkImage.tif", cv::IMREAD_ANYDEPTH | cv::IMREAD_GRAYSCALE);

    if (!image.data)
    {
        std::cout << "Could not open or find the image" << std::endl;
        return -1;
    }

    cv::namedWindow("Original", cv::WINDOW_NORMAL | cv::WINDOW_KEEPRATIO);
    cv::imshow("Original", image);

    // Stretch the value range to the full 16-bit range [0, 65535]
    cv::normalize(image, image, 0, 65535.0, cv::NORM_MINMAX, CV_16U);
    cv::namedWindow("Normalize", cv::WINDOW_NORMAL | cv::WINDOW_KEEPRATIO);
    cv::imshow("Normalize", image);
    cv::waitKey();

    return 0;
}

The original and normalized images show that the contrast enhancement is not sufficient. ImageJ's normalization gives me a much better result.

Are the alpha and beta values appropriate for a 16-bit image? I am new to OpenCV and any help is appreciated.

I use: OpenCV 3.1, Visual Studio 2015, Windows 10, 64-bit.

Yeah, histogram equalization is probably the way to go. equalizeHist doesn't work for 16-bit input, so I would recommend either

image.convertTo(image,CV_8U,1./256.);

or

image.convertTo(image,CV_32F);

followed by

equalizeHist(image,imageEq);

The 8-bit option is tried and true, but it might lose information during truncation. The float route won't actually work as written, though: equalizeHist only accepts 8-bit single-channel (CV_8UC1) input, so passing a CV_32F image raises an assertion error, and any workaround would end up truncating/binning internally anyway, which defeats the purpose of float.
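To see where the truncation loss comes from: the 1/256 scaling collapses 256 neighbouring 16-bit gray levels onto each 8-bit level. A sketch of that many-to-one mapping (plain C++; note OpenCV's convertTo actually rounds rather than truncates, but the collapse is the same):

```cpp
#include <cassert>
#include <cstdint>

// Mimic image.convertTo(image, CV_8U, 1./256.) with plain truncation:
// every block of 256 adjacent 16-bit gray levels maps to one 8-bit level,
// so any contrast within such a block is lost.
uint8_t to8bit(uint16_t v) {
    return static_cast<uint8_t>(v / 256);
}
```

Two pixels that differ by up to 255 gray levels in the 16-bit image can become identical after conversion, which is the information loss mentioned above.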

Alternatively, if you wanted to do it properly and weren't concerned with run-time/dev-time, you could implement a 16-bit histogram followed by a 16-bit-to-8-bit lookup table, following the idea behind histogram equalization/CLAHE. (Build the cumulative distribution function and apply this 65K vector directly as a lookup table to the image to make the result uniformly distributed.)

Or if you want to do your own version of cv::normalize() to give a centered mean and a reasonable standard deviation, you could do something like this:

Scalar imMean, imStd;
meanStdDev(image, imMean, imStd);
double a = (1<<16)*(0.25/imStd.val[0]);   // give equalized image a stdDev of 0.25
double b = (1<<16)*0.5 - a*imMean.val[0]; // give equalized image a mean of 0.5
imageEq = a*image+b;
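The same coefficients can be checked without OpenCV. A small sketch that computes a and b for a raw pixel buffer (plain C++; the helper name is made up for illustration), targeting a mean of 0.5 and standard deviation of 0.25 on the normalized [0, 1] scale:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Compute a and b so that a*pixel + b has mean 0.5*65536 and standard
// deviation 0.25*65536, mirroring the cv::meanStdDev-based snippet above.
void meanStdCoeffs(const std::vector<uint16_t>& img, double& a, double& b) {
    double mean = 0.0;
    for (uint16_t v : img) mean += v;
    mean /= img.size();

    double var = 0.0;
    for (uint16_t v : img) var += (v - mean) * (v - mean);
    double stdDev = std::sqrt(var / img.size());

    a = 65536.0 * (0.25 / stdDev);      // target stdDev of 0.25 (of full range)
    b = 65536.0 * 0.5 - a * mean;       // target mean of 0.5 (of full range)
}
```

Note that, unlike NORM_MINMAX, this transform can push outlier pixels outside [0, 65535], so in the OpenCV version the CV_16U saturation cast clips them.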
