
Accessing neighboring cells for numpy array

How can I access and modify the surrounding 8 cells for a 2D numpy array in an efficient manner?

I have a 2D numpy array like this:

arr = np.random.rand(720, 1440)

For each grid cell, I want to reduce each of the surrounding 8 cells (fewer for corner cells) by 10% of the center cell's value, but only if the surrounding cell's value exceeds 0.25. I suspect that the only way to do this is with a for loop, but I would like to see if there are better/faster solutions.

EDIT: For-loop-based solution:

arr = np.random.rand(720, 1440)

for (x, y), value in np.ndenumerate(arr):
    # Find 10% of current cell
    reduce_by = value * 0.1

    # Reduce the nearby 8 cells by 'reduce_by' but only if the cell value exceeds 0.25
    # [0] [1] [2]
    # [3] [*] [5]
    # [6] [7] [8]
    # * refers to current cell

    # cell [0]
    arr[x-1][y+1] = arr[x-1][y+1] * reduce_by if arr[x-1][y+1] > 0.25 else arr[x-1][y+1]

    # cell [1]
    arr[x][y+1] = arr[x][y+1] * reduce_by if arr[x][y+1] > 0.25 else arr[x][y+1]

    # cell [2]
    arr[x+1][y+1] = arr[x+1][y+1] * reduce_by if arr[x+1][y+1] > 0.25 else arr[x+1][y+1]

    # cell [3]
    arr[x-1][y] = arr[x-1][y] * reduce_by if arr[x-1][y] > 0.25 else arr[x-1][y]

    # cell [4] or current cell
    # do nothing

    # cell [5]
    arr[x+1][y] = arr[x+1][y] * reduce_by if arr[x+1][y] > 0.25 else arr[x+1][y]

    # cell [6]
    arr[x-1][y-1] = arr[x-1][y-1] * reduce_by if arr[x-1][y-1] > 0.25 else arr[x-1][y-1]

    # cell [7]
    arr[x][y-1] = arr[x][y-1] * reduce_by if arr[x][y-1] > 0.25 else arr[x][y-1]

    # cell [8]
    arr[x+1][y-1] = arr[x+1][y-1] * reduce_by if arr[x+1][y-1] > 0.25 else arr[x+1][y-1]

Please clarify your question

  • Is it really intended that one loop iteration depends on the others, as mentioned by @jakevdp in the comments?
  • If this is the case, how exactly should the border pixels be handled? This will affect the whole result, due to the dependence of one loop iteration on the others.
  • Please add a working reference implementation (you get an out-of-bounds error in your current reference implementation).

Borders untouched, dependent loop iterations

I don't see any way around using a compiler here. In this example I use Numba, but you can do quite the same in Cython if that is preferred.

import numpy as np
import numba as nb

@nb.njit(fastmath=True)
def without_borders(arr):
  for x in range(1,arr.shape[0]-1):
    for y in range(1,arr.shape[1]-1):
      # Find 10% of current cell
      reduce_by = arr[x,y] * 0.1

      # Reduce the nearby 8 cells by 'reduce_by' but only if the cell value exceeds 0.25
      # [0] [1] [2]
      # [3] [*] [5]
      # [6] [7] [8]
      # * refers to current cell

      # cell [0]
      arr[x-1][y+1] = arr[x-1][y+1] * reduce_by if arr[x-1][y+1] > 0.25 else arr[x-1][y+1]

      # cell [1]
      arr[x][y+1] = arr[x][y+1] * reduce_by if arr[x][y+1] > 0.25 else arr[x][y+1]

      # cell [2]
      arr[x+1][y+1] = arr[x+1][y+1] * reduce_by if arr[x+1][y+1] > 0.25 else arr[x+1][y+1]

      # cell [3]
      arr[x-1][y] = arr[x-1][y] * reduce_by if arr[x-1][y] > 0.25 else arr[x-1][y]

      # cell [4] or current cell
      # do nothing

      # cell [5]
      arr[x+1][y] = arr[x+1][y] * reduce_by if arr[x+1][y] > 0.25 else arr[x+1][y]

      # cell [6]
      arr[x-1][y-1] = arr[x-1][y-1] * reduce_by if arr[x-1][y-1] > 0.25 else arr[x-1][y-1]

      # cell [7]
      arr[x][y-1] = arr[x][y-1] * reduce_by if arr[x][y-1] > 0.25 else arr[x][y-1]

      # cell [8]
      arr[x+1][y-1] = arr[x+1][y-1] * reduce_by if arr[x+1][y-1] > 0.25 else arr[x+1][y-1]
  return arr

Timings

arr = np.random.rand(720, 1440)

#non-compiled verson: 6.7s
#compiled version:    6ms (the first call takes about 450ms due to compilation overhead)

This is really easy to do and gives a speed-up of about a factor of 1000. Depending on the first 3 points there may be some more optimizations possible.
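
For reference, a minimal sketch (not part of the original answer, assuming the without_borders function compiled above) of how such timings could be reproduced with a plain timer:

import time
import numpy as np

arr = np.random.rand(720, 1440)
without_borders(arr.copy())   # warm-up call, includes the ~450ms compilation overhead

work = arr.copy()
start = time.perf_counter()
without_borders(work)
print(f"compiled call: {(time.perf_counter() - start) * 1e3:.1f} ms")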

No need for loops: avoid the usual Python loops, they are very slow. For greater efficiency, rely on numpy's built-in matrix operations, "universal" functions, filters, masks and conditions whenever you can ( https://realpython.com/numpy-array-programmin ). For complicated computations vectorization is not too bad; see the charts and benchmarks in Most efficient way to map function over numpy array (just do not use it for simpler matrix operations, like squaring of cells, where the built-in functions will outperform it).

It is easy to see that each internal cell would be multiplied by .9 up to 8 times, once per neighbour (each reduction being by .1), plus once more for being a central cell itself; and, since a cell stops being reduced once it falls to the .25 threshold, those conditional reductions cannot take it below .25 * .9 = 0.225. For border and corner cells the number of reductions falls to 5 and 3 respectively.

Therefore

x1 = 700  # for debugging, use smaller arrays
x2 = 1400

neighbors = 8  # each internal cell has 8 neighbors


for i in range(neighbors):
    view1 = arr[1:-1, 1:-1]  # internal cells only
    # multiply in place by .9, but only where the cell still exceeds .25
    np.multiply(view1, .9, where=view1 > .25, out=view1)

arr[1:-1, 1:-1] *= .9

Borders and corners are treated in the same way, with neighbours = 5 and 3 respectively and with different views. I guess all three cases could be joined in one formula with a complicated where clause, yet the speed-up would be moderate, as borders and corners make up only a small fraction of all cells.

Here I used a small loop, yet it is just 8 repetitions. One could get rid of the loop too, using power, log, integer part and max functions, resulting in a slightly clumsy but somewhat faster one-liner, something like

      numpy.multiply( view1, x ** numpy.max( numpy.ceil( (numpy.log (* view1/x... / log(.9)
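
A minimal sketch of that closed-form idea (my reading of the expression above, under the same simplified model in which every reduction uses the factor .9 and at most 8 reductions apply per cell):

import numpy

arr = numpy.random.rand(720, 1440)
view1 = arr[1:-1, 1:-1]

# number of times each internal cell would be multiplied by .9 before it
# drops to the .25 threshold, capped at 8 (one per neighbour)
t = numpy.log(.25 / view1) / numpy.log(.9)
k = numpy.clip(numpy.ceil(t), 0, 8)

arr[1:-1, 1:-1] = view1 * .9 ** k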

We can also try another useful technique, vectorization. Vectorization builds a function which can then be applied to all the elements of the array.

For a change, let's precompute the margins/thresholds to find the exact coefficient to multiply by. Here is what the code could look like:

import numpy

n = 8                                       # at most 8 reductions per cell
decrease_by = .9 ** numpy.arange(n + 1)     # possible total multipliers: .9**0 ... .9**8

margins = .25 / .9 ** numpy.arange(n)       # thresholds above which one more reduction applies

# to do: save border rows for further analysis, skip this for simplicity now
view1 = arr[1:-1, 1:-1]

def decrease(x):
    # the number of thresholds this value exceeds is the number of times it gets reduced
    k = numpy.searchsorted(margins, x)
    return x * decrease_by[k]

f = numpy.vectorize(decrease)
arr[1:-1, 1:-1] = f(view1)

Remark 1 One can try using different combinations of approaches, e.g. use the precomputed margins with matrix arithmetic rather than with vectorization. Perhaps there are even more tricks to slightly speed up each of the above solutions or their combinations.
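
For instance, a minimal sketch of that combination (assuming the margins, decrease_by and view1 arrays defined above), replacing the vectorized call with whole-array operations:

k = numpy.searchsorted(margins, view1)      # number of reductions for every cell at once
arr[1:-1, 1:-1] = view1 * decrease_by[k]    # fancy indexing looks up the matching multiplier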

Remark 2 PyTorch has much functionality in common with NumPy but can greatly benefit from a GPU. If you have a decent GPU, consider PyTorch. There have been attempts at GPU-based numpy (gluon, the abandoned gnumpy, minpy). More on GPUs: https://stsievert.com/blog/2016/07/01/numpy-gpu/

EDIT: ah, I see that when you say "reduce" you mean multiply, not subtract. I also failed to recognize that you want reductions to compound, which this solution does not do. So it's incorrect, but I'll leave it up in case it's helpful.

You can do this in a vectorized manner using scipy.signal.convolve2d :

import numpy as np
from scipy.signal import convolve2d

arr = np.random.rand(720, 1440)

mask = np.zeros((arr.shape[0] + 2, arr.shape[1] + 2))
mask[1:-1, 1:-1] = arr
mask[mask < 0.25] = 0
conv = np.ones((3, 3))
conv[1, 1] = 0

arr -= 0.1 * convolve2d(mask, conv, mode='valid')

This comes from thinking about your problem the other way around: each square should have 0.1 times all the surrounding values subtracted from it. The conv array encodes this, and we slide it over the mask array using scipy.signal.convolve2d to accumulate the values that should be subtracted.
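
For what it's worth, here is a small sanity check (not part of the original answer) of this kernel against an explicit loop, under the same non-compounding, subtract-10%-of-each-neighbour reading:

import numpy as np
from scipy.signal import convolve2d

a = np.random.rand(5, 7)

# vectorized version, identical to the answer above but on a small array
mask = np.zeros((a.shape[0] + 2, a.shape[1] + 2))
mask[1:-1, 1:-1] = a
mask[mask < 0.25] = 0
conv = np.ones((3, 3))
conv[1, 1] = 0
vec = a - 0.1 * convolve2d(mask, conv, mode='valid')

# explicit loop over all neighbours, always using the original values
loop = a.copy()
for i in range(a.shape[0]):
    for j in range(a.shape[1]):
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == dj == 0:
                    continue
                ni, nj = i + di, j + dj
                if 0 <= ni < a.shape[0] and 0 <= nj < a.shape[1] and a[ni, nj] >= 0.25:
                    loop[i, j] -= 0.1 * a[ni, nj]

print(np.allclose(vec, loop))   # True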

The size of your array is a typical screen size, so I guess that the cells are pixel values in the range [0, 1). Now, pixel values are never multiplied by each other. If they were, operations would depend on the range (e.g., [0, 1) or [0, 255]), but they never do. So I would assume that when you say "reduce by 10% of a cell" you mean "subtract 10% of a cell". But even so, the operation remains dependent on the order it is applied to the cells, because the usual way of calculating the total variation of a cell first and then applying it (as in a convolution) would cause some cell values to become negative (e.g., 0.251 - 8 * 0.1 * 0.999), which does not make sense if they are pixels.

Let me assume for now that you really want to multiply cells by each other and by a factor, and that you want to do that by first having each cell affected by its neighbor number 0 (your numbering), then by its neighbor number 1, and so on for neighbors number 2, 3, 5, 7 and 8. As a rule, it's easier to define this kind of operations from the “point of view” of the target cells than from that of the source cells. Since numpy operates quickly on full arrays (or views thereof), the way to do this is to shift all neighbors in the position of the cell that is to be modified. Numpy has no shift() , but it has a roll() which for our purpose is just as good, because we don't care about the boundary cells, that, as per your comment, can be restored to the original value as a last step. Here is the code:

import numpy as np

arr = np.random.rand(720, 1440)
threshold = 0.25
factor    = 0.1
#                                                0 1 2
#                                    neighbors:  3   5
#                                                6 7 8
#                                                       ∆y  ∆x    axes
arr0 = np.where(arr  > threshold, arr  * np.roll(arr,   (1,  1), (0, 1)) * factor, arr)
arr1 = np.where(arr0 > threshold, arr0 * np.roll(arr0,   1,       0    ) * factor, arr0)
arr2 = np.where(arr1 > threshold, arr1 * np.roll(arr1,  (1, -1), (0, 1)) * factor, arr1)
arr3 = np.where(arr2 > threshold, arr2 * np.roll(arr2,       1,      1 ) * factor, arr2)
arr5 = np.where(arr3 > threshold, arr3 * np.roll(arr3,      -1,      1 ) * factor, arr3)
arr6 = np.where(arr5 > threshold, arr5 * np.roll(arr5, (-1,  1), (0, 1)) * factor, arr5)
arr7 = np.where(arr6 > threshold, arr6 * np.roll(arr6,  -1,       0    ) * factor, arr6)
res  = np.where(arr7 > threshold, arr7 * np.roll(arr7, (-1, -1), (0, 1)) * factor, arr7)
# fix the boundary:
res[:,  0] = arr[:,  0]
res[:, -1] = arr[:, -1]
res[ 0, :] = arr[ 0, :]
res[-1, :] = arr[-1, :]

Please note that even so, the main steps are different from what you do in your solution. But they necessarily are, because rewriting your solution in numpy would cause arrays to be read and written to in the same operation, and this is not something that numpy can do in a predictable way.

If you should change your mind, and decide to subtract instead of multiplying, you only need to change the column of * s before np.roll to a column of - s. But this would only be the first step in the direction of a proper convolution (a common and important operation on 2D images), for which you would need to completely reformulate your question, though.

Two notes: in your example code you indexed the array like arr[x][y] , but in numpy arrays, by default, the leftmost index is the most slowly varying one, ie, in 2D, the vertical one, so that the correct indexing is arr[y][x] . This is confirmed by the order of the sizes of your array. Secondly, in images, matrices, and in numpy, the vertical dimension is usually represented as increasing downwards. This causes your numbering of the neighbors to differ from mine. Just multiply the vertical shifts by -1 if necessary.


EDIT

Here is an alternative implementation that yields exactly the same results. It is slightly faster, but modifies the array in place:

arr[1:-1, 1:-1] = np.where(arr[1:-1, 1:-1] > threshold, arr[1:-1, 1:-1] * arr[ :-2,  :-2] * factor, arr[1:-1, 1:-1])
arr[1:-1, 1:-1] = np.where(arr[1:-1, 1:-1] > threshold, arr[1:-1, 1:-1] * arr[ :-2, 1:-1] * factor, arr[1:-1, 1:-1])
arr[1:-1, 1:-1] = np.where(arr[1:-1, 1:-1] > threshold, arr[1:-1, 1:-1] * arr[ :-2, 2:  ] * factor, arr[1:-1, 1:-1])
arr[1:-1, 1:-1] = np.where(arr[1:-1, 1:-1] > threshold, arr[1:-1, 1:-1] * arr[1:-1,  :-2] * factor, arr[1:-1, 1:-1])
arr[1:-1, 1:-1] = np.where(arr[1:-1, 1:-1] > threshold, arr[1:-1, 1:-1] * arr[1:-1, 2:  ] * factor, arr[1:-1, 1:-1])
arr[1:-1, 1:-1] = np.where(arr[1:-1, 1:-1] > threshold, arr[1:-1, 1:-1] * arr[2:  ,  :-2] * factor, arr[1:-1, 1:-1])
arr[1:-1, 1:-1] = np.where(arr[1:-1, 1:-1] > threshold, arr[1:-1, 1:-1] * arr[2:  , 1:-1] * factor, arr[1:-1, 1:-1])
arr[1:-1, 1:-1] = np.where(arr[1:-1, 1:-1] > threshold, arr[1:-1, 1:-1] * arr[2:  , 2:  ] * factor, arr[1:-1, 1:-1])

This answer assumes that you really want to do exactly what you wrote in your question. Well, almost exactly, since your code crashes because indices get out of bounds. The easiest way to fix that is to add conditions, like, eg,

if x > 0 and y < y_max:
    arr[x-1][y+1] = ...

The reason why the main operation cannot be vectorized using numpy or scipy is that all cells are “reduced” by some neighbor cells that have already been “reduced”. Numpy or scipy would use the unaffected values of the neighbors on each operation. In my other answer I show how to do this with numpy if you are allowed to group operations in 8 steps, each along the direction of one particular neighbor, but each using the unaffected value in that step for that neighbor. As I said, here I presume you have to proceed sequentially.

Before I continue, let me swap x and y in your code. Your array has a typical screen size, where 720 is the height and 1440 the width. Images are usually stored by rows, and the rightmost index in an ndarray is, by default, the one that varies more rapidly, so everything makes sense. It's admittedly counter-intuitive, but the correct indexing is arr[y, x] .

The major optimization that can be applied to your code (that cuts execution time from ~9 s to ~3.9 s on my Mac) is not to assign a cell to itself when it's not necessary, coupled with in-place multiplication and with [y, x] instead of [y][x] indexing. Like this:

y_size, x_size = arr.shape
y_max, x_max = y_size - 1, x_size - 1
for (y, x), value in np.ndenumerate(arr):
    reduce_by = value * 0.1
    if y > 0 and x < x_max:
        if arr[y - 1, x + 1] > 0.25: arr[y - 1, x + 1] *= reduce_by
    if x < x_max:
        if arr[y    , x + 1] > 0.25: arr[y    , x + 1] *= reduce_by
    if y < y_max and x < x_max:
        if arr[y + 1, x + 1] > 0.25: arr[y + 1, x + 1] *= reduce_by
    if y > 0:
        if arr[y - 1, x    ] > 0.25: arr[y - 1, x    ] *= reduce_by
    if y < y_max:
        if arr[y + 1, x    ] > 0.25: arr[y + 1, x    ] *= reduce_by
    if y > 0 and x > 0:
        if arr[y - 1, x - 1] > 0.25: arr[y - 1, x - 1] *= reduce_by
    if x > 0:
        if arr[y    , x - 1] > 0.25: arr[y    , x - 1] *= reduce_by
    if y < y_max and x > 0:
        if arr[y + 1, x - 1] > 0.25: arr[y + 1, x - 1] *= reduce_by

The other optimization (that brings execution time further down to ~3.0 s on my Mac) is to avoid the boundary checks by using an array with extra boundary cells. We don't care what value the boundary contains, because it will never be used. Here is the code:

y_size, x_size = arr.shape
arr1 = np.empty((y_size + 2, x_size + 2))
arr1[1:-1, 1:-1] = arr
for y in range(1, y_size + 1):
    for x in range(1, x_size + 1):
        reduce_by = arr1[y, x] * 0.1
        if arr1[y - 1, x + 1] > 0.25: arr1[y - 1, x + 1] *= reduce_by
        if arr1[y    , x + 1] > 0.25: arr1[y    , x + 1] *= reduce_by
        if arr1[y + 1, x + 1] > 0.25: arr1[y + 1, x + 1] *= reduce_by
        if arr1[y - 1, x    ] > 0.25: arr1[y - 1, x    ] *= reduce_by
        if arr1[y + 1, x    ] > 0.25: arr1[y + 1, x    ] *= reduce_by
        if arr1[y - 1, x - 1] > 0.25: arr1[y - 1, x - 1] *= reduce_by
        if arr1[y    , x - 1] > 0.25: arr1[y    , x - 1] *= reduce_by
        if arr1[y + 1, x - 1] > 0.25: arr1[y + 1, x - 1] *= reduce_by
arr = arr1[1:-1, 1:-1]

For the records, if the operations could be vectorized using numpy or scipy, the speed-up with respect to this solution would be at least by a factor of 35 (measured on my Mac).

NB: if numpy did operations on array slices sequentially, the following would yield factorials (ie, products of positive integers up to a number) – but it does not:

>>> import numpy as np
>>> arr = np.arange(1, 11)
>>> arr
array([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10])
>>> arr[1:] *= arr[:-1]
>>> arr
array([ 1,  2,  6, 12, 20, 30, 42, 56, 72, 90])
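
For contrast, an explicit element-by-element loop (my addition, not in the original note) does produce the factorials, because each step sees the already-updated previous element:

import numpy as np

arr = np.arange(1, 11)
for i in range(1, len(arr)):
    arr[i] *= arr[i - 1]
print(arr)   # 1, 2, 6, 24, 120, ..., 3628800 (the factorials)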

Try using pandas

import numpy as np
import pandas as pd
# create a random array as a pandas DataFrame
df = pd.DataFrame(np.random.rand(720, 1440))
# define the center locations of each 3x3 block
Center_Locations = (df.index % 3 == 1,
                    df.columns.values % 3 == 1)
# new values for the centers, to be used later
df_center = df.iloc[Center_Locations] * 1.25
# change the df, centers included
df = df * 0.9
# replace only the centers' values
df.iloc[Center_Locations] = df_center

We can do this using linear indices. As described, your implementation depends on how you iterate through the array. So I assume we want to fix the array, work out what to multiply each element by, and then simply apply the multiplication. Then it doesn't matter how we go through the array.

How much to multiply each element is given by:

1 if a[i,j] < 0.25 else np.prod(neighbours_a*0.1)

so we will first go through the whole array, and get the 8 neighbours of each element, multiply them together, with a factor of 0.1^8, and then apply a conditional elementwise multiplication of those values with a.

To do this we will use linear indexing and offsets into it. For an array with m rows and n columns, the (i, j)th element has linear index i*n + j. To move down a row we can just add n, as the (i+1, j)th element has linear index (i+1)*n + j = (i*n + j) + n. This arithmetic provides a good way to get the neighbours of every point, as the neighbours are all at fixed offsets from each point.
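
As a tiny illustration of that arithmetic (my addition), numpy's own index converter gives the same numbers; the full implementation follows below:

import numpy as np

# (i, j) = (1, 2) in a 3x4 array: linear index 1*4 + 2 = 6
print(np.ravel_multi_index((1, 2), (3, 4)))   # 6
# one row down, (2, 2): add the number of columns, 6 + 4 = 10
print(np.ravel_multi_index((2, 2), (3, 4)))   # 10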

import numpy as np

# make some random array
columns = 3
rows = 3
a = np.random.random([rows, columns])

# this contains all the reduce-by values, as well as padding values of 1
# on the top, bottom, left and right. We pad the array so we don't have to
# worry about edge cases when gathering neighbours.
pad_row, pad_col = [1, 1], [1,1]
reduce_by = np.pad(a*0.1, [pad_row, pad_col], 'constant', constant_values=1.)

# build linear indices into the [row + 2, column + 2] array. 
pad_offset = 1
linear_inds_col = np.arange(pad_offset, columns + pad_offset)
linear_row_offsets = np.arange(pad_offset, rows + pad_offset)*(columns + 2*pad_offset)
linear_inds_for_array = linear_inds_col[None, :] + linear_row_offsets[:, None]

# get all possible row, col offsets, as linear offsets. We start by making
# normal indices, e.g. [-1, 1] = up 1 row, along 1 col, then make these into single
# linear offsets such as -1*(columns + 2) + 1 for the [-1, 1] example
offsets = np.array(np.meshgrid([1, -1, 0], [1, -1, 0])).T.reshape([-1, 2])[:-1, :]
offsets[:,0] *= (columns + 2*pad_offset)
offsets = offsets.sum(axis=1)

# to every element in the flat linear indices we made, we just have to add
# the corresponding linear offsets, to get the neighbours
linear_inds_for_neighbours = linear_inds_for_array[:,:,None] + offsets[None,None,:]

# we can take these values from reduce by and multiply along the channels
# then the resulting [rows, columns] matrix will contain the potential
# total multiplicative factor to reduce by (if a[i,j] > 0.25)
relavent_values = np.take(reduce_by, linear_inds_for_neighbours)
reduce_by = np.prod(relavent_values, axis=2)

# do reduction
val_numpy = np.where(a > 0.25, a*reduce_by, a)

# check same as loop
val_loop = np.copy(a)
for i in range(rows):
    for j in range(columns):
        reduce_by = a[i,j]*0.1
        for off_row in range(-1, 2):
            for off_col in range(-1, 2):
                if off_row == 0 and off_col == 0:
                    continue
                if 0 <= (i + off_row) <= rows - 1 and 0 <= (j + off_col) <= columns - 1:
                    mult = reduce_by if a[i + off_row, j + off_col] > 0.25 else 1.
                    val_loop[i + off_row, j + off_col] *= mult


print('a')
print(a)
print('reduced np')
print(val_numpy)
print('reduce loop')
print(val_loop)
print('equal {}'.format(np.allclose(val_numpy, val_loop)))

It's not possible to avoid the loop because the reduction is performed sequentially, not in parallel.

Here's my implementation. For each (i, j), create a 3x3 block-view of a centered at a[i, j] (whose value I temporarily set to 0 so that it falls below the threshold, since we don't want to reduce the center itself). For (i, j) at the boundary, the block is 2x2 at the corners and 2x3 or 3x2 elsewhere. Then the block is masked by the threshold and the elements above the threshold are multiplied by a_ij*0.1 .

def reduce(a, threshold=0.25, r=0.1):
    for (i, j), a_ij in np.ndenumerate(a):
        a[i,j] = 0       
        block = a[0 if i == 0 else (i-1):i+2, 0 if j == 0 else (j-1):j+2]   
        np.putmask(block, block>threshold, block*a_ij*r)  
        a[i,j] = a_ij   
    return a

Note that the reduction is also performed from the boundary cells onto the cells surrounding them, i.e. the loop starts from the first corner of the array, a[0, 0] , which has 3 neighbors: a[0,1] , a[1,0] and a[1,1] , which are reduced by a[0,0]*0.1 if they are > 0.25. Then it goes to the cell a[0,1] which has 5 neighbors, etc. If you want to operate strictly on cells that have 8 neighbors, i.e. a window of size 3x3, the loop should go from a[1,1] to a[-2, -2] , and the function should be modified as follows:

def reduce_(a, threshold=0.25, r=0.1):
    ''' without borders -- as in OP's solution'''
    for (i, j), a_ij in np.ndenumerate(a[1:-1,1:-1]):
        block = a[i:i+3, j:j+3]
        mask = ~np.diag([False, True, False])*(block > threshold)
        np.putmask(block, mask, block*a_ij*r)   
    return a

Example:

>>> a = np.random.rand(4, 4)
array([[0.55197876, 0.95840616, 0.88332771, 0.97894739],
       [0.06717366, 0.39165116, 0.10248439, 0.42335457],
       [0.73611318, 0.09655115, 0.79041814, 0.40971255],
       [0.34336608, 0.39239233, 0.14236677, 0.92172401]])

>>> reduce(a.copy())    
array([[0.00292008, 0.05290198, 0.00467298, 0.00045746],
       [0.06717366, 0.02161831, 0.10248439, 0.00019783],
       [0.00494474, 0.09655115, 0.00170875, 0.00419891],
       [0.00016979, 0.00019403, 0.14236677, 0.0001575 ]])

>>> reduce_(a.copy())
array([[0.02161831, 0.03753609, 0.03459563, 0.01003268],
       [0.06717366, 0.00401381, 0.10248439, 0.00433872],
       [0.02882996, 0.09655115, 0.03095682, 0.00419891],
       [0.00331524, 0.00378859, 0.14236677, 0.00285336]])

Another example for 3x2 array:

>>> a = np.random.rand(3, 2)
array([[0.17246979, 0.42743388],
       [0.1911065 , 0.41250723],
       [0.73389051, 0.22333497]])

>>> reduce(a.copy())
array([[0.17246979, 0.00737194],
       [0.1911065 , 0.0071145 ],
       [0.01402513, 0.22333497]])

>>> reduce_(a.copy())  # same as a because there are no cells with 8 neighbors
array([[0.17246979, 0.42743388],
       [0.1911065 , 0.41250723],
       [0.73389051, 0.22333497]])

By breaking the problem down into smaller ones, we see that @jakevdp's solution actually does the job, but forgets to re-check the mask < 0.25 condition after the convolution with the mask, so that some values may later drop below 0.25 (there may be 8 tests for every pixel). So there must be a for loop, unless there's a built-in function for that I haven't heard of..

Here's my proposal:

# x or y first depends on whether you want rows or columns first; the results differ
for x in range(arr.shape[1]-3):
    for y in range(arr.shape[0]-3):
        k = arr[y:y+3,x:x+3]
        arr[y:y+3,x:x+3] = k/10**(k>0.25)
