Count how many times a value of a column changes for more than n consecutive times, together with the changes, with group by, and condition in pandas
I have a pandas dataframe:
import pandas as pd
foo = pd.DataFrame({'id': ['a','a','a','a','a','b','b','b','b','b','c','c','c','c'],
                    'week': [1,2,3,4,5,3,4,5,6,7,1,2,3,4],
                    'col': [1,1,2,2,1,4,3,3,3,4,6,6,7,7],
                    'confidence': ['h','h','h','l','h','h','h','h','h','h','h','h','l','l']})
I want to count how many times the value of col changes (n_changes), together with the previous value (from) and the new value (to), but only when the new value appears n or more consecutive times and at least one of those consecutive rows has confidence 'h'. I want to do this per id.
If n=3, the output should look like this:
id from to n_changes
b 4 3 1
because: for b, 3 appears after 4 three or more consecutive times, and at least one 'h' appears within those consecutive occurrences.
If n=2, the output should look like this:
id from to n
a 1 2 1
b 4 3 1
because:
for a, 2 appears after 1 two or more consecutive times, with at least one 'h' within those consecutive occurrences;
for b, 3 appears after 4 two or more consecutive times, with at least one 'h' within those consecutive occurrences.
c does not appear in the output because, even though 7 appears after 6 two or more consecutive times, there is not at least one 'h' within those consecutive occurrences.
Is there a way to do this? Any ideas?
Update
I have already tried this for n=2:
test = foo.copy()  # work on a copy of the dataframe above
test['next_col'] = test.groupby(['id'])['col'].transform('shift', periods=-1)
test['next_next_col'] = test.groupby(['id'])['col'].transform('shift', periods=-2)
test['next_confidence'] = test.groupby(['id'])['confidence'].transform('shift', periods=-1)
test['next_next_confidence'] = test.groupby(['id'])['confidence'].transform('shift', periods=-2)
test['n_h'] = (test['next_confidence'] == 'h').astype(int) + (test['next_next_confidence'] == 'h').astype(int)
final_test = test[test.eval('next_col == next_next_col and n_h >= 1 and col != next_col')].copy()
final_test['helper'] = 1
final_test['n'] = final_test.groupby(['id','col','next_col'])['helper'].transform('sum')
final_test[['id','col','next_col', 'n']].rename(columns={'col': 'from', 'next_col': 'to'})
which gives this output:
id from to n
1 a 1 2.0 1
5 b 4 3.0 1
which is correct. But is there a more efficient way to do it?
Here is one way to do it. The key idea is to build a run_no value that identifies each run of consecutive col values (within a given id). Note that there is no groupby(...).apply(some_python_function), so this should be reasonably fast even on a large df.
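Before the full solution, the run-numbering idiom can be seen in isolation (a toy example of mine, not part of the original answer): comparing a Series with its own shift flags the start of each run, and cumsum turns those flags into distinct run ids.

```python
import pandas as pd

# Toy data: three runs of consecutive values -> three run ids
s = pd.Series([1, 1, 2, 2, 2, 1])
run_no = (s != s.shift(1)).cumsum()
print(run_no.tolist())  # -> [1, 1, 2, 2, 2, 3]
```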
# first, let's establish a "run_no" which is distinct for each
# run of same 'col' for a given 'id'.
# we also set a 'is_h' for later .any() operation, plus a few useful columns:
cols = ['id', 'col']
z = df.assign(
    from_=df.groupby('id')['col'].shift(1, fill_value=-1),
    to=df['col'],
    run_no=(df[cols] != df[cols].shift(1)).any(axis=1).cumsum(),
    is_h=df['confidence'] == 'h')
# next, make a mask that selects the rows we are interested in
gb = z.groupby(['id', 'run_no'])
mask = (gb.size() >= n) & (gb['is_h'].any() & (gb.first()['from_'] != -1))
# finally, we select according to that mask, and add n_changes:
out = gb.first().loc[mask].reset_index()
out = out.assign(n_changes=out.groupby(['id', 'from_', 'to']).size().values)
out = out[['id', 'from_', 'to', 'n_changes']]
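The steps above can be collected into a single self-contained sketch; wrapping them in a `count_changes` helper is my addition (not part of the original answer), so the whole thing can be run end to end against the question's data:

```python
import pandas as pd

def count_changes(df, n):
    # identify runs of consecutive 'col' values within each 'id'
    cols = ['id', 'col']
    z = df.assign(
        from_=df.groupby('id')['col'].shift(1, fill_value=-1),
        to=df['col'],
        run_no=(df[cols] != df[cols].shift(1)).any(axis=1).cumsum(),
        is_h=df['confidence'] == 'h')
    gb = z.groupby(['id', 'run_no'])
    # keep runs that are long enough, contain at least one 'h', and are
    # not the very first run of an id (from_ == -1 means "no previous value")
    mask = (gb.size() >= n) & gb['is_h'].any() & (gb.first()['from_'] != -1)
    out = gb.first().loc[mask].reset_index()
    out['n_changes'] = out.groupby(['id', 'from_', 'to'])['to'].transform('size')
    return out[['id', 'from_', 'to', 'n_changes']]

foo = pd.DataFrame({'id': ['a','a','a','a','a','b','b','b','b','b','c','c','c','c'],
                    'week': [1,2,3,4,5,3,4,5,6,7,1,2,3,4],
                    'col': [1,1,2,2,1,4,3,3,3,4,6,6,7,7],
                    'confidence': ['h','h','h','l','h','h','h','h','h','h','h','h','l','l']})
print(count_changes(foo, 2))
```

The only behavioral difference from the snippet above is that n_changes is computed with transform('size'), which aligns the counts by row instead of relying on the order of `.size().values`.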
Result, with n = 2:
>>> out
id from_ to n_changes
0 a 1 2 1
1 b 4 3 1
And with n = 1:
>>> out
id from_ to n_changes
0 a 1 2 1
1 a 2 1 1
2 b 4 3 1
3 b 3 4 1
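As a sanity check, the n = 1 result can also be reproduced with a brute-force loop over runs (my own sketch using itertools.groupby, independent of the vectorized approach):

```python
import pandas as pd
from itertools import groupby

foo = pd.DataFrame({'id': ['a','a','a','a','a','b','b','b','b','b','c','c','c','c'],
                    'week': [1,2,3,4,5,3,4,5,6,7,1,2,3,4],
                    'col': [1,1,2,2,1,4,3,3,3,4,6,6,7,7],
                    'confidence': ['h','h','h','l','h','h','h','h','h','h','h','h','l','l']})

n = 1
rows = []
for id_, g in foo.groupby('id'):
    # collapse consecutive duplicates of col into (value, length, has_h) runs
    runs = []
    for val, grp in groupby(zip(g['col'], g['confidence']), key=lambda t: t[0]):
        grp = list(grp)
        runs.append((val, len(grp), any(c == 'h' for _, c in grp)))
    # a change qualifies when the new run is long enough and contains an 'h'
    for (prev, _, _), (cur, length, has_h) in zip(runs, runs[1:]):
        if length >= n and has_h:
            rows.append((id_, prev, cur))

out = pd.DataFrame(rows, columns=['id', 'from_', 'to'])
out['n_changes'] = out.groupby(['id', 'from_', 'to'])['to'].transform('size')
print(out)  # the four qualifying changes for n = 1
```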
Note: if you are interested in the intermediate values, you can of course inspect z (which does not depend on n) and mask (which does). For example, z:
>>> z
id week col confidence from_ to run_no is_h
0 a 1 1 h -1 1 1 True
1 a 2 1 h 1 1 1 True
2 a 3 2 h 1 2 2 True
3 a 4 2 l 2 2 2 False
4 a 5 1 h 2 1 3 True
5 b 3 4 h -1 4 4 True
6 b 4 3 h 4 3 5 True
7 b 5 3 h 3 3 5 True
8 b 6 3 h 3 3 5 True
9 b 7 4 h 3 4 6 True
10 c 1 6 h -1 6 7 True
11 c 2 6 h 6 6 7 True
12 c 3 7 l 6 7 8 False
13 c 4 7 l 7 7 8 False