
Count unique records based on conditional result of aggregate functions on multiple columns

My data looks like this:

df = pd.DataFrame({'ID': [1, 1, 1, 1, 2, 2, 3, 3, 3, 4, 4,
                          4, 4, 5, 5, 5],
                   'group': ['A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'B',
                             'B', 'B', 'B', 'B', 'B', 'B'],
                   'attempts': [0, 1, 1, 1, 1, 1, 1, 0, 1,
                                1, 1, 1, 0, 0, 1, 0],
                   'successes': [1, 0, 0, 0, 0, 0, 0, 1, 0,
                                 0, 0, 0, 1, 1, 0, 1],
                   'score': [None, 5, 5, 4, 5, 4, 5, None, 1, 5,
                             0, 1, None, None, 1, None]})

## df output
   ID group attempts successes score
0   1     A        0         1  None
1   1     A        1         0     5
2   1     A        1         0     5
3   1     A        1         0     4
4   2     A        1         0     5
5   2     A        1         0     4
6   3     A        1         0     5
7   3     A        0         1  None
8   3     A        1         0     1
9   4     B        1         0     5
10  4     B        1         0     0
11  4     B        1         0     1
12  4     B        0         1  None
13  5     B        0         1  None
14  5     B        1         0     1
15  5     B        0         1  None

I'm trying to group by two columns (group, score) and count the number of unique ID values, after first identifying which (group, ID) pairs have at least 1 successes count across all score values. In other words, I only want to count an ID once (unique) in the aggregation if it has at least one associated success. I also only want to count each unique ID once per (group, ID) pair, regardless of how many attempts it contains (i.e. if an ID has a sum of 5 success counts, I only want to include 1).

The successes and attempts columns are binary (only 1 or 0). For example, for ID = 1, group = A, there is at least 1 success. Therefore, when counting the number of unique IDs per (group, score), I will include that ID.
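
For reference, this qualification check can be written as a groupby max (just a sketch against the df above; the has_success name is only for illustration, and since successes is 0/1 its max is 1 exactly when a (group, ID) pair has at least one success):

has_success = df.groupby(['group', 'ID'])['successes'].max().astype(bool)
# has_success output
# group  ID
# A      1     True
#        2    False
#        3     True
# B      4     True
#        5     True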

I'd like the final output to look something like this, so that I can calculate the ratio of unique successes to unique attempts for each (group, score) combination.

group score successes_count attempts_counts ratio
    A     5              2                3  0.67
          4              1                2  0.50
          1              1                1   1.0              
          0              0                0   inf
    B     5              1                1   1.0
          4              0                0   inf
          1              2                2   1.0
          0              1                1   1.0

So far I've been able to run a pivot table that sums per (group, ID), to identify those IDs that have at least 1 success. However, I'm not sure of the best way to use this to reach my desired final state.

p = pd.pivot_table(data=df,
                   values=['ID'],
                   index=['group', 'ID'],
                   columns=['successes', 'attempts'],
                   aggfunc={'ID': 'count'})
# p output
            ID     
successes    0    1
attempts     1    0
group ID           
A     1    3.0  1.0
      2    2.0  NaN
      3    2.0  1.0
B     4    3.0  1.0
      5    1.0  2.0
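
For what it's worth, the same information can be read off p directly: a (group, ID) pair has at least one success exactly when its successes=1 column is non-NaN (sketch only, assuming p as built above; the qualified name is just for illustration):

qualified = p.xs(1, axis=1, level='successes').notna().any(axis=1)
# qualified matches the groupby-max sketch above:
# A/1 True, A/2 False, A/3 True, B/4 True, B/5 True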

Let's try something like:

import numpy as np
import pandas as pd

df = pd.DataFrame({'ID': [1, 1, 1, 1, 2, 2, 3, 3, 3, 4, 4,
                          4, 4, 5, 5, 5],
                   'group': ['A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'B',
                             'B', 'B', 'B', 'B', 'B', 'B'],
                   'attempts': [0, 1, 1, 1, 1, 1, 1, 0, 1,
                                1, 1, 1, 0, 0, 1, 0],
                   'successes': [1, 0, 0, 0, 0, 0, 0, 1, 0,
                                 0, 0, 0, 1, 1, 0, 1],
                   'score': [None, 5, 5, 4, 5, 4, 5, None, 1, 5,
                             0, 1, None, None, 1, None]})

# Groups With At least 1 Success
m = df.groupby('group')['successes'].transform('max').astype(bool)
# Filter out groups with no successes (copy so later assignments are safe)
df = df[m].copy()

# Replace 0 successes with NaNs
df['successes'] = df['successes'].replace(0, np.nan)
# FFill/BFill within each (ID, group) so that any success will fill the whole pair
df['successes'] = df.groupby(['ID', 'group'])['successes'] \
    .transform(lambda s: s.ffill().bfill())

# Pivot then stack to make sure each group has all score values
# Sort and reset index
# Rename Columns
# fix types
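# (drop_duplicates first so each (ID, group, score) row is only counted once,
#  e.g. the two identical ID=1 / score=5 rows collapse to one)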
p = df.drop_duplicates() \
    .pivot_table(index='group',
                 columns='score',
                 values=['attempts', 'successes'],
                 aggfunc='sum',
                 fill_value=0) \
    .stack() \
    .sort_values(['group', 'score'], ascending=[True, False]) \
    .reset_index() \
    .rename(columns={'attempts': 'attempts_counts',
                     'successes': 'successes_count'}) \
    .convert_dtypes()

# Calculate Ratio
p['ratio'] = p['successes_count'] / p['attempts_counts']
print(p)

Output:

  group  score  attempts_counts  successes_count     ratio
0     A      5                3                2  0.666667
1     A      4                2                1       0.5
2     A      1                1                1       1.0
3     A      0                0                0       NaN
4     B      5                1                1       1.0
5     B      4                0                0       NaN
6     B      1                2                2       1.0
7     B      0                1                1       1.0
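
Note the 0/0 cells come out as NaN here rather than the inf shown in the desired output. If inf is really wanted for those rows, one possible tweak (not part of the answer above) is to recompute the ratio in plain floats and fill the NaNs:

p['ratio'] = (p['successes_count'].astype(float)
              / p['attempts_counts'].astype(float)).fillna(np.inf)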
