
Get top N values in an inner level in a MultiIndex based dataframe

I have a pandas MultiIndex DataFrame that was converted from an xarray Dataset with three dimensions (time, latitude and longitude) and two variables ("FFDI" and "REF_ID"). time has 17,696 entries (daily from 1972-01-20 to 2020-06-30), latitude has 148 and longitude has 244.

The dataframe looks like:

                                    FFDI    REF_ID
latitude    longitude   time        
-39.200001  140.800003  2009-02-07  10.2    0
                        2009-01-30  10.1    0
                        1983-02-12  10.0    0
                        2003-01-13  9.8     0
                        2019-12-28  9.8     0
                        2000-01-17  9.7     0
            ...     ...     ...     ...     ...

-33.900002  150.000000  ... ...     ...     ...
                        1994-06-16  0.9     36111
                        1978-07-07  0.2     36111
                        2020-08-28  0.1     36111
                        2007-06-09  0.0     36111
                        1994-07-30  0.0     36111
                        1987-06-21  0.0     36111
                        
639037952 rows × 2 columns

The DataFrame has already been sorted descending on "FFDI". What I want to achieve is to get the top N (say 3) "time" rows for each latitude/longitude pair.

So if N = 3, the DataFrame would look like:

                                    FFDI    REF_ID
latitude    longitude   time        
-39.200001  140.800003  2009-02-07  10.2    0
                        2009-01-30  10.1    0
                        1983-02-12  10.0    0
-39.200001  140.83786   2001-01-03  10.5    0
                        2006-01-18  10.3    0
                        2009-02-07  10.2    0
            ...     ...     ...     ...     ...

-33.900002  150.000000  2009-02-07  10.9    36111
                        2006-01-10  10.7    36111
                        1983-01-23  10.6    36111

Give this a shot:

df.groupby(level=['latitude','longitude'],
           group_keys=False).apply(lambda x: x.nlargest(n=3,columns=['FFDI','REF_ID']))

The group_keys=False is necessary because you're grouping on levels of the MultiIndex; if it were set to True -- which is the default -- groupby() would redundantly prepend those keys to the index of the output.
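A note on the column list: nlargest sorts by the first column and uses the subsequent columns only to break ties, so REF_ID here only matters when two rows share the same FFDI. A minimal illustration on a small hypothetical frame:

```python
import pandas as pd

# Hypothetical frame with a tie in FFDI, to show how nlargest treats the
# column list: FFDI is the primary key, REF_ID only breaks ties.
df = pd.DataFrame({'FFDI': [9.8, 9.8, 10.2, 9.7],
                   'REF_ID': [5, 7, 1, 9]})

top = df.nlargest(n=3, columns=['FFDI', 'REF_ID'])
# 10.2 comes first; the 9.8 tie is resolved in favour of the larger REF_ID.
```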

I created a smaller dataset:

import numpy as np, pandas as pd

# Three (latitude, longitude) pairs, each repeated 10 times, paired
# with 30 distinct daily timestamps -- 10 rows per pair.
latitudes = [-39.200001, -39.200001, -39.200002] * 10
longitudes = [140.800003, 140.83786, 150.000000] * 10
times = pd.date_range(start='2020-06-01', end='2020-06-30')  # 30 days

n_rows = len(times)  # 30

s = pd.Series(
        np.random.randn(n_rows),
        index=pd.MultiIndex.from_tuples(zip(latitudes, longitudes, times),
                                        names=['latitude', 'longitude', 'time'])
    )

df = pd.DataFrame(s, columns=['FFDI'])
df['REF_ID'] = np.random.randint(0, 36111, n_rows)

Then tested:

In [48]: df.groupby(level=['latitude','longitude'],
                    group_keys=False).apply(lambda x: x.nlargest(n=3,columns=['FFDI','REF_ID']))
Out[48]: 
                                      FFDI  REF_ID
latitude   longitude  time                        
-39.200002 150.000000 2020-06-09  1.658600   32650
                      2020-06-24  1.412439    6124
                      2020-06-06  0.248274   15765
-39.200001 140.800003 2020-06-13  0.906517    6980
                      2020-06-25  0.757745   27483
                      2020-06-04  0.671170   31313
           140.837860 2020-06-20  1.162408   20113
                      2020-06-14  1.014437   34023
                      2020-06-11  0.657841    8366
