
Pandas groupby select top N rows based on column value AND group size share

I have the following data:

    group   cluster probabilityA    probabilityB
0   a   0   0.28    0.153013
1   a   0   0.28    0.133686
2   a   0   0.28    0.058366
3   a   0   0.28    0.091937
4   a   1   0.50    0.040095
5   a   1   0.50    0.150359
6   a   2   0.32    0.043512
7   a   2   0.32    0.088408
8   a   2   0.32    0.005158
9   a   2   0.32    0.107054
10  a   2   0.32    0.029050
11  a   2   0.32    0.099361
12  b   0   0.40    0.057752
13  b   0   0.40    0.177103
14  b   1   0.60    0.218634
15  b   1   0.60    0.098535
16  b   1   0.60    0.065746
17  b   1   0.60    0.190805
18  b   1   0.60    0.191425

What I want to do is to select the top 5 (an arbitrary number N) rows per group based on probabilityB AND on the share of the size of every cluster. If we only look at group a, there are 3 clusters: 0, 1 and 2. Their respective size shares are:

group  cluster
a      0          0.333333
       1          0.166667
       2          0.500000
Name: probabilityA, dtype: float64
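
For reference, these shares can be computed with something like the sketch below (the names sizes and shares are just illustrative; probabilityA is only used for counting rows):

    sizes = df.groupby(["group", "cluster"])["probabilityA"].count()
    shares = sizes / sizes.groupby(level="group").transform("sum")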

And here, if I want the top 5 rows allocated according to these shares, I would take

round(
    df
        .groupby(["group", "cluster"])["probabilityA"]
        .count()
    / df
        .groupby(["group", "cluster"])["probabilityA"]
        .count()
        .sum(level=0)
    * 5
)

group  cluster
a      0          2.0
       1          1.0
       2          2.0

That is, 2 elements each from clusters 0 and 2, and only 1 element from cluster 1, picking the rows with the largest probabilityB. So my result will look like this (the index is irrelevant in the sample below):

    group   cluster probabilityA    probabilityB
0   a   1   0.50    0.150359
1   a   2   0.32    0.107054
2   a   2   0.32    0.088408
3   a   0   0.28    0.153013
4   a   0   0.28    0.133686
5   b   0   0.40    0.177103
6   b   1   0.60    0.218634
7   b   1   0.60    0.191425
8   b   1   0.60    0.190805
9   b   1   0.60    0.098535
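
In other words, within each (group, cluster) sub-frame I want its k rows with the largest probabilityB, where k is the allocated count from above. For a single sub-frame that is roughly (an illustrative sketch for group a, cluster 0, where k = 2):

    sub = df[(df["group"] == "a") & (df["cluster"] == 0)]
    sub.nlargest(2, "probabilityB")   # keeps the rows with probabilityB 0.153013 and 0.133686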

Is there a way I can achieve it?

Thanks in advance!

I think the clearest solution is to divide the task into steps:

  1. Generate counts for each top-level group:

     c1 = df.groupby(["group"])["probabilityA"].count().rename('c1')

    For your data, the result is:

     group
     a    12
     b     7
     Name: c1, dtype: int64
  2. Set the number of rows to take from each top-level group:

     N = 5
  3. Generate the counts of rows to take from each second-level group:

     cnt = df.groupby(["group", "cluster"])["probabilityA"].count().rename('c2')\
         .reset_index(level=1).join(c1).set_index('cluster', append=True)\
         .apply(lambda row: N * row.c2 / row.c1, axis=1).round().astype(int)

    For your data, the result is:

     group  cluster
     a      0          2
            1          1
            2          2
     b      0          1
            1          4
     dtype: int32
  4. Then define a function returning the respective number of "top" rows:

     def takeFirst(grp):
         grpKey = tuple(grp.iloc[0, 0:2])
         grpCnt = cnt.loc[grpKey]
         return grp.nlargest(grpCnt, 'probabilityB')
  5. And the last step is to compute the result:

     df.groupby(['group', 'cluster']).apply(takeFirst)

    For your data, the result is:

                       group  cluster  probabilityA  probabilityB
     group cluster
     a     0       0       a        0          0.28      0.153013
                   1       a        0          0.28      0.133686
           1       5       a        1          0.50      0.150359
           2       9       a        2          0.32      0.107054
                   11      a        2          0.32      0.099361
     b     0       13      b        0          0.40      0.177103
           1       14      b        1          0.60      0.218634
                   18      b        1          0.60      0.191425
                   17      b        1          0.60      0.190805
                   15      b        1          0.60      0.098535

I deliberately left group and cluster as index columns to make it easier to see which group each row was taken from, but in the final version you can append .reset_index(level=[0,1], drop=True) to drop them.
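
To make it easier to run, here is the whole approach collected into one snippet (a consolidated sketch of the steps above; it assumes the sample data is already loaded in df):

     N = 5

     # step 1: row counts per top-level group
     c1 = df.groupby(["group"])["probabilityA"].count().rename('c1')

     # step 3: number of rows to take from each (group, cluster)
     cnt = df.groupby(["group", "cluster"])["probabilityA"].count().rename('c2')\
         .reset_index(level=1).join(c1).set_index('cluster', append=True)\
         .apply(lambda row: N * row.c2 / row.c1, axis=1).round().astype(int)

     # steps 4 and 5: keep the allocated number of rows with the largest probabilityB
     def takeFirst(grp):
         grpKey = tuple(grp.iloc[0, 0:2])    # (group, cluster) of this sub-frame
         grpCnt = cnt.loc[grpKey]            # how many rows to keep
         return grp.nlargest(grpCnt, 'probabilityB')

     result = df.groupby(['group', 'cluster']).apply(takeFirst)
     # optionally drop the duplicated index levels:
     # result = result.reset_index(level=[0, 1], drop=True)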

I think if you group by probabilityA, you might be able to achieve this:

df.groupby(['group', 'cluster', 'probabilityA']).aggregate({
    'group': 'first',
    'cluster': 'first',
    'probabilityA': lambda x: round(len(x)/(sum(x)*(len(x))*n)),
    'probabilityB': lambda x: sum(x)
})

The solution above was faulty because count().sum() gives a different result when taken over the overall groupby than over probabilityA alone, which is why I did the following instead:

UPDATE - Full Solution:

  1. Sort your dataframe:

         df = df.sort_values(by=['group', 'cluster', 'probabilityB'], ascending=False)

  2. Create counts of objects in a separate grouped dataframe (for newer pandas versions, see the note after the output below):

         cluster = pd.DataFrame(round(df.groupby(['group', 'cluster', 'probabilityA'])['probabilityA'].count()
                   / df.groupby(['group', 'cluster', 'probabilityA'])['probabilityB'].count().sum(level=0) * 5))
         cluster.reset_index(level=['group', 'cluster', 'probabilityA'], inplace=True)
         cluster = cluster.rename(columns={0: 'counts'})
         cluster['counts'] = pd.to_numeric(cluster['counts'], downcast='integer')

  3. Create the new dataframe, taking the top probabilityB rows per cluster:

         output = pd.concat(cluster.apply(lambda x: df.loc[(df['group'] == x['group']) & (df['cluster'] == x['cluster'])].groupby(
             ['group', 'cluster']).head(x['counts']), axis=1).tolist())

Output: (screenshot of the resulting DataFrame not reproduced here)
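
Note that sum(level=0), used in step 2 above and in the snippet in the question, is deprecated in recent pandas releases and was removed in pandas 2.0. On those versions the same per-group totals can be obtained by grouping on the index level; a minimal equivalent sketch:

     # pandas >= 2.0 equivalent of ...count().sum(level=0): group on the first index level
     df.groupby(['group', 'cluster', 'probabilityA'])['probabilityB'].count().groupby(level=0).sum()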
