
Pandas aggregate count distinct

Let's say I have a log of user activity and I want to generate a report of the total duration and the number of unique users per day.

import numpy as np
import pandas as pd
df = pd.DataFrame({'date': ['2013-04-01','2013-04-01','2013-04-01','2013-04-02', '2013-04-02'],
    'user_id': ['0001', '0001', '0002', '0002', '0002'],
    'duration': [30, 15, 20, 15, 30]})

Aggregating duration is pretty straightforward:

group = df.groupby('date')
agg = group.aggregate({'duration': np.sum})
agg
            duration
date
2013-04-01        65
2013-04-02        45

What I'd like to do is sum the duration and count distincts at the same time, but I can't seem to find an equivalent for count_distinct:

agg = group.aggregate({ 'duration': np.sum, 'user_id': count_distinct})

This works, but surely there's a better way, no?

group = df.groupby('date')
agg = group.aggregate({'duration': np.sum})
agg['uv'] = df.groupby('date').user_id.nunique()
agg
            duration  uv
date
2013-04-01        65   2
2013-04-02        45   1

I'm thinking I just need to provide a function that returns the count of distinct items of a Series object to the aggregate function, but I don't have a lot of exposure to the various libraries at my disposal. Also, it seems that the groupby object already knows this information, so wouldn't I just be duplicating the effort?

How about either of:

>>> df
         date  duration user_id
0  2013-04-01        30    0001
1  2013-04-01        15    0001
2  2013-04-01        20    0002
3  2013-04-02        15    0002
4  2013-04-02        30    0002
>>> df.groupby("date").agg({"duration": np.sum, "user_id": pd.Series.nunique})
            duration  user_id
date                         
2013-04-01        65        2
2013-04-02        45        1
>>> df.groupby("date").agg({"duration": np.sum, "user_id": lambda x: x.nunique()})
            duration  user_id
date                         
2013-04-01        65        2
2013-04-02        45        1

Since pandas 0.20.0, 'nunique' has been an option for .agg(), so:

df.groupby('date').agg({'duration': 'sum', 'user_id': 'nunique'})
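As a minimal runnable sketch using the sample data from the question, the string-based spec looks like this:

```python
import pandas as pd

df = pd.DataFrame({'date': ['2013-04-01', '2013-04-01', '2013-04-01',
                            '2013-04-02', '2013-04-02'],
                   'user_id': ['0001', '0001', '0002', '0002', '0002'],
                   'duration': [30, 15, 20, 15, 30]})

# Both aggregations are given as string aliases, which lets pandas
# dispatch to its optimized built-in implementations.
agg = df.groupby('date').agg({'duration': 'sum', 'user_id': 'nunique'})
print(agg)
#             duration  user_id
# date
# 2013-04-01        65        2
# 2013-04-02        45        1
```

On pandas 0.25+ you can also use named aggregation, e.g. `df.groupby('date').agg(uv=('user_id', 'nunique'))`, if you want to control the output column name.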

Just adding to the answers already given: the solution using the string "nunique" seems much faster. Tested here on a ~21M-row dataframe, then grouped down to ~2M groups:

%time _=g.agg({"id": lambda x: x.nunique()})
CPU times: user 3min 3s, sys: 2.94 s, total: 3min 6s
Wall time: 3min 20s

%time _=g.agg({"id": pd.Series.nunique})
CPU times: user 3min 2s, sys: 2.44 s, total: 3min 4s
Wall time: 3min 18s

%time _=g.agg({"id": "nunique"})
CPU times: user 14 s, sys: 4.76 s, total: 18.8 s
Wall time: 24.4 s

If you want just the number of distinct values per group, you can use the method nunique directly on the DataFrameGroupBy object:

df.groupby('date')['user_id'].nunique()
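A minimal sketch with the question's sample data — the result is a Series indexed by date:

```python
import pandas as pd

df = pd.DataFrame({'date': ['2013-04-01', '2013-04-01', '2013-04-01',
                            '2013-04-02', '2013-04-02'],
                   'user_id': ['0001', '0001', '0002', '0002', '0002'],
                   'duration': [30, 15, 20, 15, 30]})

# Calling nunique on the grouped column yields one distinct count per group.
uv = df.groupby('date')['user_id'].nunique()
print(uv)
# date
# 2013-04-01    2
# 2013-04-02    1
# Name: user_id, dtype: int64
```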

You can find it for all columns at once with the aggregate method:

df.aggregate(func=pd.Series.nunique, axis=0)
# or
df.aggregate(func='nunique', axis=0)
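For example, on the sample frame from the question this counts distinct values down each column (note: over the whole frame, not per group):

```python
import pandas as pd

df = pd.DataFrame({'date': ['2013-04-01', '2013-04-01', '2013-04-01',
                            '2013-04-02', '2013-04-02'],
                   'user_id': ['0001', '0001', '0002', '0002', '0002'],
                   'duration': [30, 15, 20, 15, 30]})

# axis=0 applies the reduction column-wise across the entire DataFrame.
counts = df.aggregate(func='nunique', axis=0)
print(counts)
# date        2
# user_id     2
# duration    3
# dtype: int64
```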

See the aggregate entry in the Pandas docs.
