
Python: Counting cumulative occurrences of values in a pandas series

I have a DataFrame that looks like this:

    fruit
0  orange
1  orange
2  orange
3    pear
4  orange
5   apple
6   apple
7    pear
8    pear
9  orange

I would like to add a column with a cumulative count of each value, i.e.

    fruit  cum_count
0  orange          1
1  orange          2
2  orange          3
3    pear          1
4  orange          4
5   apple          1
6   apple          2
7    pear          2
8    pear          3
9  orange          5

Currently I'm doing it like this:

df['cum_count'] = [(df.fruit[0:i+1] == x).sum() for i, x in df.fruit.iteritems()]

... which is fine for 10 rows, but takes a very long time when I try to do the same thing on a few million rows. Is there a more efficient way to do this?

You can use groupby together with cumcount:

df['cum_count'] = df.groupby('fruit').cumcount() + 1

In [16]: df
Out[16]:
    fruit  cum_count
0  orange          1
1  orange          2
2  orange          3
3    pear          1
4  orange          4
5   apple          1
6   apple          2
7    pear          2
8    pear          3
9  orange          5
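
For reference, a minimal self-contained sketch of this approach (assuming a reasonably recent pandas; the DataFrame construction below simply rebuilds the sample data from the question):

import pandas as pd

# Rebuild the sample data from the question
df = pd.DataFrame({'fruit': ['orange', 'orange', 'orange', 'pear', 'orange',
                             'apple', 'apple', 'pear', 'pear', 'orange']})

# cumcount() numbers the rows within each group starting from 0,
# so adding 1 gives a 1-based running occurrence count per fruit
df['cum_count'] = df.groupby('fruit').cumcount() + 1
print(df)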

Timings

In [8]: %timeit [(df.fruit[0:i+1] == x).sum() for i, x in df.fruit.iteritems()]
100 loops, best of 3: 3.76 ms per loop

In [9]: %timeit df.groupby('fruit').cumcount() + 1
1000 loops, best of 3: 926 µs per loop

So it's roughly 4x faster.

Probably better is to use groupby with cumcount on the specified column, as it's more efficient:

df['cum_count'] = df.groupby('fruit')['fruit'].cumcount() + 1
print(df)

    fruit  cum_count
0  orange          1
1  orange          2
2  orange          3
3    pear          1
4  orange          4
5   apple          1
6   apple          2
7    pear          2
8    pear          3
9  orange          5

Comparison for len(df) = 10, where my solution is the fastest:

In [3]: %timeit df.groupby('fruit')['fruit'].cumcount() + 1
The slowest run took 11.67 times longer than the fastest. This could mean that an intermediate result is being cached 
1000 loops, best of 3: 299 µs per loop

In [4]: %timeit df.groupby('fruit').cumcount() + 1
The slowest run took 12.78 times longer than the fastest. This could mean that an intermediate result is being cached 
1000 loops, best of 3: 921 µs per loop

In [5]: %timeit [(df.fruit[0:i+1] == x).sum() for i, x in df.fruit.iteritems()]
The slowest run took 4.47 times longer than the fastest. This could mean that an intermediate result is being cached 
100 loops, best of 3: 2.72 ms per loop

Comparison for len(df) = 10k:

In [7]: %timeit df.groupby('fruit')['fruit'].cumcount() + 1
The slowest run took 4.65 times longer than the fastest. This could mean that an intermediate result is being cached 
1000 loops, best of 3: 845 µs per loop

In [8]: %timeit df.groupby('fruit').cumcount() + 1
The slowest run took 5.59 times longer than the fastest. This could mean that an intermediate result is being cached 
100 loops, best of 3: 1.59 ms per loop

In [9]: %timeit [(df.fruit[0:i+1] == x).sum() for i, x in df.fruit.iteritems()]
1 loops, best of 3: 5.12 s per loop
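
If you want to reproduce such a comparison yourself, a rough sketch along these lines should work (the 10k row count and random sample data are assumptions; absolute timings will vary by machine and pandas version):

import timeit

import numpy as np
import pandas as pd

# Hypothetical larger frame: 10k rows drawn at random from the three fruits
rng = np.random.default_rng(0)
big = pd.DataFrame({'fruit': rng.choice(['orange', 'pear', 'apple'], size=10_000)})

# Time the whole-frame groupby versus the single-column groupby
t_whole = timeit.timeit(lambda: big.groupby('fruit').cumcount() + 1, number=100)
t_column = timeit.timeit(lambda: big.groupby('fruit')['fruit'].cumcount() + 1, number=100)
print('whole-frame groupby:  ', t_whole)
print('single-column groupby:', t_column)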
