
How to set overflowed subtraction in python to result in a zero?

I am trying to find regions which have features strikingly different from the baseline.

To do that I subtract the base from the current image; f and b are grayscale 2D image matrices.

diff = f - b

Some of the operations wrap around (unsigned overflow), and this leads to regions of high pixel value where really they should be set to zero.


How do I specify that the operation diff = f - b should yield 0 for an individual pixel wherever f[x][y] < b[x][y]?

Here is one way of doing it in numpy that doesn't require casting to a larger integer type:

f - b.clip(None, f)

or, equivalently,

f - np.minimum(b, f)
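To see why this avoids wraparound, here is a minimal sketch with made-up 2×2 uint8 arrays standing in for f and b. Clipping b element-wise to at most f guarantees every difference is non-negative before the uint8 subtraction happens:

```python
import numpy as np

# Hypothetical tiny "images": f is the current frame, b the baseline.
f = np.array([[10, 200], [50, 0]], dtype=np.uint8)
b = np.array([[30, 100], [50, 5]], dtype=np.uint8)

# Naive subtraction wraps around: 10 - 30 becomes 236 under uint8 arithmetic.
naive = f - b

# Clipping b to at most f first makes every element-wise difference
# non-negative, so no wraparound can occur.
diff = f - b.clip(None, f)

print(naive)  # wrapped values where f < b
print(diff)   # 0 where f < b, f - b elsewhere
```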

I fixed that by making my own function which compares each pixel before subtracting, to prevent any wraparound from occurring. Note the argument order: it computes i2 - i1 clamped to zero.

from copy import deepcopy

def custom_sub(i2, i1):
    # Element-wise i2 - i1, clamped to 0 wherever i1 > i2.
    x = len(i1)
    y = len(i1[0])

    o = deepcopy(i1)

    for ix in range(x):
        for iy in range(y):
            if i1[ix][iy] > i2[ix][iy]:
                o[ix][iy] = 0
            else:
                o[ix][iy] = i2[ix][iy] - i1[ix][iy]
    return o
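For NumPy arrays, the same per-pixel logic can be expressed without the Python loops. This is a sketch (the name `custom_sub_vec` is mine); `np.where` evaluates `i2 - i1` everywhere, including the positions that wrap, but the mask then discards those wrapped values:

```python
import numpy as np

def custom_sub_vec(i2, i1):
    # Same semantics as the loop version: 0 wherever i1 exceeds i2,
    # plain difference elsewhere. The wrapped values of i2 - i1 are
    # computed but masked out by the condition.
    return np.where(i2 >= i1, i2 - i1, 0)
```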

This is the output: bright regions now appear only where the spark flash is occurring.


This question is similar to mine. In my case the subtraction was performed on uint8 values; these can be converted to int16 so the difference can go negative without wrapping, and any negative values in the result can then be set to zero.
