Filter in group by in pandas
I have the following dataframe:

import pandas as pd

df = pd.DataFrame(dict(g = [0, 0, 1, 1, 2, 2], x = [0, 1, 1, 2, 2, 3]))
And I want to obtain a subset of this dataframe containing the groups from g such that mean(x) > 0.6. That is, I want a filter_group operation that produces the following dataframe:
>>> filtered_df = filter_group(df)
>>> filtered_df
   g  x
2  1  1
3  1  2
4  2  2
5  2  3
Is there an easy way to do this in pandas? This is similar to the having clause in SQL, but a bit different, since I want to obtain a dataframe with the same schema but fewer rows.
For R users, what I'm trying to do is:
library(dplyr)
df <- tibble(
  g = c(0, 0, 1, 1, 2, 2),
  x = c(0, 1, 1, 2, 2, 3)
)

df %>%
  group_by(g) %>%
  filter(mean(x) > 0.6)
Use GroupBy.transform to repeat the aggregated value on every row of its group, so the original rows can be filtered with boolean indexing:
df[df.groupby('g')['x'].transform('mean') > 0.6]
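To see why this works, here is a minimal runnable sketch on the question's sample dataframe: transform('mean') returns a Series aligned with the original index, with each group's mean repeated on every row of that group.

```python
import pandas as pd

df = pd.DataFrame(dict(g=[0, 0, 1, 1, 2, 2], x=[0, 1, 1, 2, 2, 3]))

# transform('mean') broadcasts each group's mean back to every row:
# group 0 -> 0.5, group 1 -> 1.5, group 2 -> 2.5
means = df.groupby('g')['x'].transform('mean')
print(means.tolist())  # [0.5, 0.5, 1.5, 1.5, 2.5, 2.5]

# Boolean indexing then keeps only the rows whose group mean exceeds 0.6,
# i.e. the rows with original index 2, 3, 4, 5.
filtered = df[means > 0.6]
print(filtered)
```

Because the mask is just an aligned boolean Series, the result keeps the same schema and the original row index, exactly as requested.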
This solution is better for a large DataFrame or many groups, so prefer it if performance is important:
import numpy as np
import pandas as pd

np.random.seed(2020)
N = 10000
df = pd.DataFrame(dict(g = np.random.randint(1000, size=N),
                       x = np.random.randint(10000, size=N)))
print(df)
In [89]: %timeit df[df.groupby('g')['x'].transform('mean') > 0.6]
2.01 ms ± 103 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [90]: %timeit df.groupby('g').filter(lambda df: df['x'].mean() > 0.6)
145 ms ± 2.2 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Alternatively, you can use the GroupBy.filter method:
df.groupby('g').filter(lambda df: df['x'].mean() > 0.6)
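Either approach can be wrapped into the filter_group helper the question asks for. A minimal sketch (the function name comes from the question; the threshold parameter is an assumption added for reuse):

```python
import pandas as pd

def filter_group(df, threshold=0.6):
    """Keep only the rows of groups in g whose mean of x exceeds threshold."""
    return df.groupby('g').filter(lambda grp: grp['x'].mean() > threshold)

df = pd.DataFrame(dict(g=[0, 0, 1, 1, 2, 2], x=[0, 1, 1, 2, 2, 3]))
print(filter_group(df))  # drops group 0 (mean 0.5), keeps groups 1 and 2
```

GroupBy.filter receives each group as a sub-DataFrame and drops the whole group when the lambda returns False, which matches the dplyr group_by + filter idiom from the question.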
To me this has the following advantages: