
Efficiently find matching rows (based on content) in a pandas DataFrame

I am writing some tests and I am using Pandas DataFrames to house a large dataset (~600,000 x 10). I have extracted 10 random rows from the source data (using Stata) and now I want to write a test to see whether those rows are present in the DataFrame used by my test suite.

As a small example:

import numpy as np
import pandas as pd

np.random.seed(2)
raw_data = pd.DataFrame(np.random.rand(5, 3), columns=['one', 'two', 'three'])
random_sample = raw_data.iloc[1]  # take one known row so a match is guaranteed

Here raw_data is:

[image: raw_data, a 5x3 DataFrame of random floats]

And random_sample, which is taken directly from raw_data to guarantee a match, is:

[image: random_sample, row 1 of raw_data]

Currently I have written:

for idx, row in raw_data.iterrows():
    if random_sample.equals(row):
        print("match")
        break

This works, but on the large dataset it is very slow. Is there a more efficient way to check whether an entire row is contained in the DataFrame?

BTW: my example also needs to treat np.NaN values as equal to each other, which is why I am using the equals() method.
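
To make the NaN point concrete, here is a minimal sketch (my own, not part of the original example) of how == and equals() differ:

import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan])
print((s == s).all())   # False -- element-wise, NaN != NaN
print(s.equals(s))      # True  -- equals() treats NaNs in matching positions as equal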

equals doesn't seem to broadcast, but we can always do the equality comparison manually:

>>> df = pd.DataFrame(np.random.rand(600000, 10))
>>> sample = df.iloc[-1]
>>> %timeit df[((df == sample) | (df.isnull() & sample.isnull())).all(1)]
1 loops, best of 3: 231 ms per loop
>>> df[((df == sample) | (df.isnull() & sample.isnull())).all(1)]
              0         1         2         3         4         5         6  \
599999  0.07832  0.064828  0.502513  0.851816  0.976464  0.761231  0.275242   

               7        8         9  
599999  0.426393  0.91632  0.569807  

which is much faster than the iterative version for me (which takes > 30s.)
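
For reuse it may help to wrap that mask in a small helper (a sketch; the name rows_matching is mine):

def rows_matching(df, sample):
    # A row matches if every column either equals the sample value
    # or both the cell and the sample value are NaN.
    mask = ((df == sample) | (df.isnull() & sample.isnull())).all(axis=1)
    return df[mask]

# presence test: is there any matching row at all?
found = not rows_matching(df, sample).empty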

But since we have lots of rows and relatively few columns, we could loop over the columns, and in the typical case probably cut down substantially on the number of rows to be looked at. For example, something like

def finder(df, row):
    # Narrow the frame one column at a time: keep only rows whose value in
    # that column equals the sample's value (or where both are NaN).
    for col in df:
        df = df.loc[(df[col] == row[col]) | (df[col].isnull() & pd.isnull(row[col]))]
    return df

gives me

>>> %timeit finder(df, sample)
10 loops, best of 3: 35.2 ms per loop

which is roughly an order of magnitude faster, because after the first column there's only one row left.

(I think I once had a much slicker way to do this but for the life of me I can't remember it now.)
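
As a quick sanity check (my own sketch) that finder also treats NaNs in the same position as equal:

import numpy as np
import pandas as pd

df_nan = pd.DataFrame({'a': [1.0, np.nan, 3.0], 'b': [np.nan, 2.0, 3.0]})
row = df_nan.iloc[1]          # this row contains a NaN in column 'a'
print(finder(df_nan, row))    # still returns exactly that row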

The best I have come up with is to take a filtering approach which seems to work quite well and prevents a lot of comparisons when the dataset is large:

tmp = raw_data
for idx, val in random_sample.items():  # idx is the column label
    try:
        if np.isnan(val):
            continue  # skip NaN sample values; they can never match with ==
    except TypeError:
        pass  # non-numeric value, fall through to the equality filter
    tmp = tmp[tmp[idx] == val]

if len(tmp) == 1:
    print("match")

Note: this is actually slower for the small example above, but on a large dataset it is ~9 times faster than the basic iteration.
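
If you want to reproduce that comparison yourself, a rough timing sketch could look like the following (my own code; it assumes a 600,000 x 10 frame of random floats, and the actual numbers will vary by machine and pandas version):

import timeit
import numpy as np
import pandas as pd

np.random.seed(2)
big = pd.DataFrame(np.random.rand(600000, 10))
sample = big.iloc[-1]

def filter_scan(df, row):
    # the filtering approach from above, packaged as a function
    tmp = df
    for col, val in row.items():
        if pd.isnull(val):
            continue
        tmp = tmp[tmp[col] == val]
    return len(tmp) == 1

def iter_scan(df, row):
    # the basic row-by-row iteration, for comparison
    for _, r in df.iterrows():
        if row.equals(r):
            return True
    return False

print(timeit.timeit(lambda: filter_scan(big, sample), number=10))
print(timeit.timeit(lambda: iter_scan(big, sample), number=1))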
