Pandas groupby select top N rows based on column value AND group size share

I have the following data:

    group   cluster probabilityA    probabilityB
0   a   0   0.28    0.153013
1   a   0   0.28    0.133686
2   a   0   0.28    0.058366
3   a   0   0.28    0.091937
4   a   1   0.50    0.040095
5   a   1   0.50    0.150359
6   a   2   0.32    0.043512
7   a   2   0.32    0.088408
8   a   2   0.32    0.005158
9   a   2   0.32    0.107054
10  a   2   0.32    0.029050
11  a   2   0.32    0.099361
12  b   0   0.40    0.057752
13  b   0   0.40    0.177103
14  b   1   0.60    0.218634
15  b   1   0.60    0.098535
16  b   1   0.60    0.065746
17  b   1   0.60    0.190805
18  b   1   0.60    0.191425

What I want to do is select the top 5 rows (an arbitrary number, could be N) per group, based on probabilityB AND on the share of the size of each cluster. If we only look at group a, there are 3 clusters: 0, 1 and 2. Their respective size shares are:

group  cluster
a      0          0.333333
       1          0.166667
       2          0.500000
Name: probabilityA, dtype: float64
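
For reference, these shares can be computed in one step with normalized value counts; a minimal sketch, assuming the DataFrame above is named df:

    # fraction of rows each cluster contributes within its group
    df.groupby("group")["cluster"].value_counts(normalize=True).sort_index()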

And here, if I want the top 5 rows based on these shares, I would take

    counts = df.groupby(["group", "cluster"])["probabilityA"].count()
    # each cluster's share of its group, scaled to 5 rows and rounded
    round(counts / counts.groupby(level="group").transform("sum") * 5)

group  cluster
a      0          2.0
       1          1.0
       2          2.0

2 elements from clusters 0 and 2, and only 1 element from cluster 1, chosen by largest probabilityB within each cluster (because each share is rounded independently, the per-cluster counts need not sum to exactly N). So, my result will look like this (the index is irrelevant in the sample below):

    group   cluster probabilityA    probabilityB
0   a   1   0.50    0.150359
1   a   2   0.32    0.107054
2   a   2   0.32    0.088408
3   a   0   0.28    0.153013
4   a   0   0.28    0.133686
5   b   0   0.40    0.177103
6   b   1   0.60    0.218634
7   b   1   0.60    0.191425
8   b   1   0.60    0.190805
9   b   1   0.60    0.098535

Is there a way I can achieve it?

Thanks in advance!

I think the clearest solution is to divide the task into steps:

  1. Generate counts for each top-level group:

     c1 = df.groupby(["group"])["probabilityA"].count().rename('c1')

    For your data, the result is:

     group
     a    12
     b     7
     Name: c1, dtype: int64
  2. Set the number of rows to take from each top-level group:

     N = 5
  3. Generate the counts of rows to take from each second-level group:

     cnt = df.groupby(["group", "cluster"])["probabilityA"].count().rename('c2') \
             .reset_index(level=1).join(c1).set_index('cluster', append=True) \
             .apply(lambda row: N * row.c2 / row.c1, axis=1).round().astype(int)

    For your data, the result is:

     group  cluster
     a      0          2
            1          1
            2          2
     b      0          1
            1          4
     dtype: int32
  4. Then define a function returning the respective number of "top" rows:

     def takeFirst(grp):
         # the first two columns of each sub-frame are (group, cluster)
         grpKey = tuple(grp.iloc[0, 0:2])
         # number of rows allotted to this (group, cluster) pair
         grpCnt = cnt.loc[grpKey]
         return grp.nlargest(grpCnt, 'probabilityB')
  5. And the last step is to compute the result:

     df.groupby(['group', 'cluster']).apply(takeFirst)

    For your data, the result is:

                       group  cluster  probabilityA  probabilityB
     group cluster
     a     0       0       a        0          0.28      0.153013
                   1       a        0          0.28      0.133686
           1       5       a        1          0.50      0.150359
           2       9       a        2          0.32      0.107054
                   11      a        2          0.32      0.099361
     b     0       13      b        0          0.40      0.177103
           1       14      b        1          0.60      0.218634
                   18      b        1          0.60      0.191425
                   17      b        1          0.60      0.190805
                   15      b        1          0.60      0.098535

I deliberately left group and cluster as index columns, to ease identification of the group each row was taken from, but in the final version you can append .reset_index(level=[0,1], drop=True) to drop them.
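
Putting the five steps together, here is a minimal end-to-end sketch of this answer (assuming only pandas and the sample df from the question; cnt is built here with a transform, which is equivalent to the join-based step 3 above):

     import pandas as pd

     N = 5  # rows to keep per top-level group

     # counts per (group, cluster); their per-group sums give the shares
     c2 = df.groupby(["group", "cluster"])["probabilityA"].count().rename('c2')
     cnt = (N * c2 / c2.groupby(level="group").transform("sum")).round().astype(int)

     # take the allotted number of largest-probabilityB rows per sub-group
     def takeFirst(grp):
         grpCnt = cnt.loc[tuple(grp.iloc[0, 0:2])]
         return grp.nlargest(grpCnt, 'probabilityB')

     result = (df.groupby(['group', 'cluster'])
                 .apply(takeFirst)
                 .reset_index(level=[0, 1], drop=True))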

I think if you group by probabilityA, you might be able to achieve this.

df.groupby(['group', 'cluster', 'probabilityA']).aggregate({
    'group': 'first',
    'cluster': 'first',
    'probabilityA': lambda x: round(len(x) / (sum(x) * len(x) * n)),  # n: the target row count N
    'probabilityB': lambda x: sum(x)
})

The solution above was faulty because count().sum() gives different results on the overall groupby and on probabilityA alone, which is why I did the following:

UPDATE - Full Solution:

  1. Sort your dataframe (assigning the result back, since sort_values returns a new frame):
     df = df.sort_values(by=['group', 'cluster', 'probabilityB'], ascending=False)
  2. Create counts of objects in a separate grouped dataframe:
     cluster = pd.DataFrame(round(df.groupby(['group', 'cluster', 'probabilityA'])['probabilityA'].count()
               / df.groupby(['group', 'cluster', 'probabilityA'])['probabilityB'].count()
                   .groupby(level='group').transform('sum') * 5))
     cluster.reset_index(level=['group', 'cluster', 'probabilityA'], inplace=True)
     cluster = cluster.rename(columns={0: 'counts'})
     cluster['counts'] = pd.to_numeric(cluster['counts'], downcast='integer')
  3. Create a new dataframe, taking the top rows of each (group, cluster) block sorted by probabilityB (a more direct variant is sketched after this list):
     output = pd.concat(cluster.apply(
         lambda x: df.loc[(df['group'] == x['group']) & (df['cluster'] == x['cluster'])]
                     .groupby(['group', 'cluster'])
                     .head(x['counts']),
         axis=1).tolist())
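
The same selection can be written a bit more directly by merging the counts back onto df and taking the head of each sorted block; a sketch, assuming probabilityA is constant within each (group, cluster) pair, as it is in the sample data:

     merged = df.merge(cluster[['group', 'cluster', 'counts']], on=['group', 'cluster'])
     output = (merged.sort_values('probabilityB', ascending=False)
                     .groupby(['group', 'cluster'], group_keys=False)
                     .apply(lambda g: g.head(g['counts'].iloc[0]))
                     .drop(columns='counts'))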

Output: (the resulting DataFrame was shown as a screenshot in the original answer)
