
Efficiently delete each row of an array if it occurs in another array in pure numpy

I have a numpy array of shape (n, 2) in which indices are stored. E.g.:

[[0, 1],
 [2, 3], 
 [1, 2], 
 [4, 2]]

Then I do some processing and create an array of shape (m, 2), where n > m. E.g.:

[[2, 3],
 [4, 2]]

Now I want to delete every row in the first array that also occurs in the second array. So my desired result is:

[[0, 1], 
 [1, 2]]

My current solution is as follows:

result = first_array
for row in second_array:
    result = np.delete(result, np.where(np.all(result == row, axis=1)), axis=0)

However, this is quite time-consuming if the second array is large. Does anyone know a numpy-only solution that does not require a loop?

Here's one that leverages the fact that the numbers are non-negative integers, using matrix multiplication for dimensionality reduction: a mixed-radix dot product encodes each row as a unique scalar, so the row-wise set difference reduces to a 1D np.isin -

def setdiff_nd_positivenums(a,b):
    # per-column radix: one more than the largest value in either array
    s = np.maximum(a.max(0)+1, b.max(0)+1)
    # weight column 0 by the radix of column 1 so each row maps to a unique scalar
    w = np.array([s[1], 1])
    return a[~np.isin(a.dot(w), b.dot(w))]

Sample run -

In [82]: a
Out[82]: 
array([[0, 1],
       [2, 3],
       [1, 2],
       [4, 2]])

In [83]: b
Out[83]: 
array([[2, 3],
       [4, 2]])

In [85]: setdiff_nd_positivenums(a,b)
Out[85]: 
array([[0, 1],
       [1, 2]])
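
To see the dimensionality reduction in isolation, here is the encoding applied to the sample arrays above (an added illustration; the weights follow from the function's own computation, giving w = [4, 1] here):

import numpy as np

a = np.array([[0, 1], [2, 3], [1, 2], [4, 2]])
b = np.array([[2, 3], [4, 2]])

# The mixed-radix dot product collapses each 2-number row into one scalar
s = np.maximum(a.max(0)+1, b.max(0)+1)   # s = [5, 4]
w = np.array([s[1], 1])                  # w = [4, 1]
print(a.dot(w))   # [ 1 11  6 18]
print(b.dot(w))   # [11 18]
# Rows of a whose scalar also appears among b's scalars (11, 18) get removed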

Also, it seems the second array b is a subset of a. So, we can leverage that scenario to boost the performance even further using np.searchsorted, like so -

def setdiff_nd_positivenums_searchsorted(a,b):
    s = np.maximum(a.max(0)+1, b.max(0)+1)
    w = np.array([s[1], 1])  # same mixed-radix weights as above
    a1D,b1D = a.dot(w),b.dot(w)
    b1Ds = np.sort(b1D)
    idx = np.searchsorted(b1Ds,a1D)
    idx[idx == len(b1Ds)] = 0  # clamp: such scalars exceed everything in b
    return a[b1Ds[idx] != a1D]

Timings -

In [146]: np.random.seed(0)
     ...: a = np.random.randint(0,9,(1000000,2))
     ...: b = a[np.random.choice(len(a), 10000, replace=False)]

In [147]: %timeit setdiff_nd_positivenums(a,b)
     ...: %timeit setdiff_nd_positivenums_searchsorted(a,b)
10 loops, best of 3: 101 ms per loop
10 loops, best of 3: 70.9 ms per loop
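
As a quick sanity check (added; this continues the session above), the two variants select the same rows on the benchmark data:

# Added check: both variants should agree, since both preserve a's row order
print(np.array_equal(setdiff_nd_positivenums(a, b),
                     setdiff_nd_positivenums_searchsorted(a, b)))
# Expected output: True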

For generic numbers, including negatives, here's another approach using views -

# https://stackoverflow.com/a/45313353/ @Divakar
def view1D(a, b): # a, b are 2D arrays with a common dtype and row length
    a = np.ascontiguousarray(a)
    b = np.ascontiguousarray(b)
    # view each row as one opaque element so whole rows compare as scalars
    void_dt = np.dtype((np.void, a.dtype.itemsize * a.shape[1]))
    return a.view(void_dt).ravel(),  b.view(void_dt).ravel()

def setdiff_nd(a,b):
    # a,b are the nD input arrays
    A,B = view1D(a,b)    
    return a[~np.isin(A,B)]

Sample run -

In [94]: a
Out[94]: 
array([[ 0,  1],
       [-2, -3],
       [ 1,  2],
       [-4, -2]])

In [95]: b
Out[95]: 
array([[-2, -3],
       [ 4,  2]])

In [96]: setdiff_nd(a,b)
Out[96]: 
array([[ 0,  1],
       [ 1,  2],
       [-4, -2]])
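
Under the hood, view1D reinterprets each row as a single opaque (void) element, so whole rows can be compared as scalars. A small added illustration, continuing with the arrays above:

# Added illustration: each 2-column row becomes one void element
A, B = view1D(a, b)
print(A.shape, A.dtype)   # (4,) |V16 for two int64 columns
print(np.isin(A, B))      # [False  True False False]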

Timings -

In [158]: np.random.seed(0)
     ...: a = np.random.randint(0,9,(1000000,2))
     ...: b = a[np.random.choice(len(a), 10000, replace=False)]

In [159]: %timeit setdiff_nd(a,b)
1 loop, best of 3: 352 ms per loop

Here is a function that works with 2D integer arrays of any shape and accepts both positive and negative numbers:

import numpy as np

# Gets a boolean array marking which rows of a also occur in b
def isin_rows(a, b):
    a = np.asarray(a)
    b = np.asarray(b)
    # Subtract minimum value per column
    min = np.minimum(a.min(0), b.min(0))
    a = a - min
    b = b - min
    # Per-column radix: one more than the maximum value per column
    max = np.maximum(a.max(0), b.max(0)) + 1
    # Compute multiplicative base for each column (cumulative product of radices)
    base = np.roll(max, 1)
    base[0] = 1
    base = np.cumprod(base)
    # Make flattened version of arrays
    a_flat = (a * base).sum(1)
    b_flat = (b * base).sum(1)
    # Check elements of a in b
    return np.isin(a_flat, b_flat)

# Test
a = np.array([[0, 1],
              [2, 3],
              [1, 2],
              [4, 2]])
b = np.array([[2, 3],
              [4, 2]])
a_in_b_mask = isin_rows(a, b)
a_not_in_b = a[~a_in_b_mask]
print(a_not_in_b)
# [[0 1]
#  [1 2]]
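
A brute-force cross-check (added) of the same mask, comparing every row of a against every row of b via broadcasting:

# Added cross-check: O(n*m) broadcast comparison, fine for small inputs
brute_mask = (a[:, None, :] == b[None, :, :]).all(-1).any(-1)
print(np.array_equal(a_in_b_mask, brute_mask))
# Expected output: True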

EDIT: One possible optimization arises from considering the number of possible distinct rows in b. If b has more rows than the possible number of combinations, then you may find its unique elements first so np.isin is faster:

import numpy as np

def isin_rows_opt(a, b):
    a = np.asarray(a)
    b = np.asarray(b)
    min = np.minimum(a.min(0), b.min(0))
    a = a - min
    b = b - min
    max = np.maximum(a.max(0), b.max(0)) + 1
    base = np.roll(max, 1)
    base[0] = 1
    base = np.cumprod(base)
    a_flat = (a * base).sum(1)
    b_flat = (b * base).sum(1)
    # Count number of possible different rows for b
    num_possible_b = np.prod(b.max(0) - b.min(0) + 1)
    if len(b_flat) > num_possible_b:  # May tune this condition
        b_flat = np.unique(b_flat)
    return np.isin(a_flat, b_flat)

The condition len(b_flat) > num_possible_b should probably be tuned so that you only deduplicate when it is really going to be worth it (maybe len(b_flat) > 2 * num_possible_b or len(b_flat) > num_possible_b + CONSTANT). It seems to give some improvement for big arrays with few distinct values:

import numpy as np

# Test setup from @Divakar
np.random.seed(0)
a = np.random.randint(0, 9, (1000000, 2))
b = a[np.random.choice(len(a), 10000, replace=False)]
print(np.all(isin_rows(a, b) == isin_rows_opt(a, b)))
# True
%timeit isin_rows(a, b)
# 100 ms ± 425 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit isin_rows_opt(a, b)
# 81.2 ms ± 324 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

The numpy-indexed package (disclaimer: I am its author) was designed to perform operations of this type efficiently on nd-arrays.

import numpy_indexed as npi
# if the output should consist of unique values and there is no need to preserve ordering
result = npi.difference(first_array, second_array)
# otherwise:
result = first_array[~npi.in_(first_array, second_array)]
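
For the question's arrays, the second, order-preserving form gives the following (a quick added run; it assumes numpy_indexed is installed):

import numpy as np
import numpy_indexed as npi

first_array = np.array([[0, 1], [2, 3], [1, 2], [4, 2]])
second_array = np.array([[2, 3], [4, 2]])
print(first_array[~npi.in_(first_array, second_array)])
# Expected output:
# [[0 1]
#  [1 2]]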
