Python, OpenCV: Increasing image brightness without overflowing UINT8 array

I am trying to increase the brightness of a grayscale image. cv2.imread() returns a numpy array, and I am adding an integer value to every element of that array. In theory, this should raise each pixel's intensity; afterwards I could cap the values at 255 and get a brighter image.

Here is the code:

import cv2
import numpy as np

grey = cv2.imread(path + file, 0)   # read the image as grayscale

print type(grey)
print grey[0]

new = grey + value                  # value is the brightness increment (an int)

print new[0]

res = np.hstack((grey, new))        # show original and result side by side

cv2.imshow('image', res)
cv2.waitKey(0)
cv2.destroyAllWindows()

However, numpy's uint8 addition effectively does something like this:

new_array = old_array % 256

Every pixel intensity that exceeds 255 wraps around to the remainder of dividing by 256.

As a result, those pixels come out dark instead of completely white.

Here is the output:

<type 'numpy.ndarray'>
[115 114 121 ..., 170 169 167]
[215 214 221 ...,  14  13  11]
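
A minimal illustration of that wrap-around on a small uint8 array (not part of the original script):

import numpy as np

a = np.array([115, 170], dtype=np.uint8)
print(a + 100)    # [215  14] -- 170 + 100 = 270 wraps to 270 % 256 = 14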

And here is the image (screenshot omitted).

How can I switch off this wrap-around behaviour? Is there a better way to increase brightness in OpenCV?

One idea would be to check, before adding value, whether the addition would overflow: look at the difference between 255 and the current pixel value and see whether it is smaller than value. If it is, we don't add value but set those pixels directly to 255; otherwise we do the addition. This decision can be expressed with a mask:

mask = (255 - grey) < value

Then feed this mask/boolean array to np.where to let it choose between 255 and grey + value based on the mask.

Thus, the final implementation would be:

grey_new = np.where((255 - grey) < value,255,grey+value)

Sample run

Let's use a small representative example to demonstrate the steps.

In [340]: grey
Out[340]: 
array([[125, 212, 104, 180, 244],
       [105,  26, 132, 145, 157],
       [126, 230, 225, 204,  91],
       [226, 181,  43, 122, 125]], dtype=uint8)

In [341]: value = 100

In [342]: grey + 100 # Bad results (e.g. look at (0,1))
Out[342]: 
array([[225,  56, 204,  24,  88],
       [205, 126, 232, 245,   1],
       [226,  74,  69,  48, 191],
       [ 70,  25, 143, 222, 225]], dtype=uint8)

In [343]: np.where((255 - grey) < 100,255,grey+value) # Expected results
Out[343]: 
array([[225, 255, 204, 255, 255],
       [205, 126, 232, 245, 255],
       [226, 255, 255, 255, 191],
       [255, 255, 143, 222, 225]], dtype=uint8)

Testing on the sample image

Using the sample image posted in the question as arr, with value set to 50, we get the brightened result (output image omitted).
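
For completeness, a minimal sketch of applying this approach to an image loaded from disk (the file name 'sample.png' and the window title are placeholders, not from the original post):

import cv2
import numpy as np

value = 50
grey = cv2.imread('sample.png', 0)     # placeholder path, read as grayscale

# saturate at 255 instead of letting uint8 addition wrap around
grey_new = np.where((255 - grey) < value, 255, grey + value)

cv2.imshow('brightened', np.hstack((grey, grey_new)))
cv2.waitKey(0)
cv2.destroyAllWindows()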

Here is another alternative:

import numpy as np

# convert data type so the addition cannot overflow
gray = gray.astype('float32')

# shift pixel intensity by a constant
intensity_shift = 50
gray += intensity_shift

# another option is to use a factor value > 1:
# gray *= factor_intensity

# clip pixel intensity to the range [0, 255]
gray = np.clip(gray, 0, 255)

# change the type back to 'uint8'
gray = gray.astype('uint8')

Briefly, you should add 50 to each value, find maxBrightness, then compute thisPixel = int(255 * thisPixel / maxBrightness).

You have to check each pixel for overflow. The method suggested by Divakar is straightforward and fast. You might actually want to increment each value (by 50 in your case) and then normalize the result to 255; this preserves detail in the bright areas of your image.
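
A minimal sketch of that add-then-normalize idea (the function name and variable names are illustrative, not from the original answer):

import numpy as np

def brighten_normalized(grey, value=50):
    # work in float so the addition cannot wrap around
    shifted = grey.astype(np.float32) + value
    # rescale so the brightest pixel maps to 255, preserving highlight detail
    max_brightness = shifted.max()
    return (255.0 * shifted / max_brightness).astype(np.uint8)

Unlike hard clipping, this keeps distinct bright values distinct instead of collapsing them all to 255.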

Use OpenCV's functions. They implement "saturating" arithmetic.

new = cv.add(grey, value)

Documentation for cv.add

When you write new = grey + value, it isn't OpenCV doing the work, it is numpy, and numpy does nothing special: wrap-around is the standard behavior for integer types.
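
As a small illustration of the difference (one-element arrays, values chosen for demonstration):

import cv2
import numpy as np

x = np.uint8([250])
y = np.uint8([10])

print(cv2.add(x, y))   # [[255]] -- OpenCV saturates at 255
print(x + y)           # [4]     -- numpy wraps around: (250 + 10) % 256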

An alternative approach that worked efficiently for me is to "blend in" a white image with the original image using the blend function in PIL's Image module.

from PIL import Image

correctionVal = 0.05                 # fraction of white to add to the main image
img_file = Image.open(location_filename)
width, height = img_file.size        # the white image must match the original's size
img_file_white = Image.new("RGB", (width, height), "white")
img_blended = Image.blend(img_file, img_file_white, correctionVal)

Conceptually, Image.blend computes:

img_blended = img_file * (1 - correctionVal) + img_file_white * correctionVal

Hence, if correctionVal = 0 we get the original image, and if correctionVal = 1 we get pure white.

Because the result is a weighted average, RGB values can never exceed 255, so no extra overflow handling is needed.

Blending in black (RGB 0, 0, 0) reduces brightness instead.

I ran into a similar issue, but instead of adding a constant I was scaling image pixels in a non-uniform manner.

A 1-D version of the problem:

import numpy as np

a = np.array([100, 200, 250, 252, 255], dtype=np.uint8)
scaling = np.array([1.1, 1.2, 1.4, 1.2, 1.1])
result = np.uint8(a * scaling)

This runs into the overflow issue, of course; the result is:

array([110, 240,  94,  46,  24], dtype=uint8)

The np.where approach works here as well:

result_lim=np.where(a*scaling<=255,a*scaling,255)

which yields result_lim as:

array([ 110.,  240.,  255.,  255.,  255.])

Curious about timing, I ran this test on a 4000 x 6000 image (rather than the 1-D array) and found that np.where(), at least under my conditions, took about 2.5 times as long. I didn't know whether there was a better or faster way of doing this. The other option, converting to float, doing the operation and then clipping as noted above, was a bit slower than the np.where() method.
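
For reference, a minimal sketch of the two variants being compared, using the 1-D example values from above (timing code omitted):

import numpy as np

a = np.array([100, 200, 250, 252, 255], dtype=np.uint8)
scaling = np.array([1.1, 1.2, 1.4, 1.2, 1.1])

# variant 1: np.where, cap anything above 255
res_where = np.where(a * scaling <= 255, a * scaling, 255).astype(np.uint8)

# variant 2: clip the float result, then convert back to uint8
res_clip = np.clip(a * scaling, 0, 255).astype(np.uint8)

print(res_where)   # [110 240 255 255 255]
print(res_clip)    # [110 240 255 255 255]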

I don't know whether there are better methods for this.
