
How to keep the values with most frequent prefix in a groupby pandas dataframe?

Suppose I have this dataframe:

    Country City
0   Spain   m1_name
1   Spain   m1_location
2   Spain   m1_size
3   Spain   m2_location
4   USA     m1_name
5   USA     m2_name
6   USA     m3_size
7   USA     m3_location

I want to group by the "Country" column and keep, within each group, only the rows whose prefix is the most frequent one. The expected result is:

    Country City
0   Spain   m1_name
1   Spain   m1_location
2   Spain   m1_size
6   USA     m3_size
7   USA     m3_location

I have already tried extracting the prefix, computing the mode of the prefix per group, and merging the rows on that mode, but I feel there is a more direct and efficient solution.

Here is working example code that reproduces the result:

import pandas as pd

df = pd.DataFrame({
    "Country": ["Spain","Spain","Spain","Spain","USA","USA","USA","USA"],
    "City": ["m1_name","m1_location","m1_size","m2_location","m1_name","m2_name","m3_size","m3_location"]
})
# Extract the market number that follows the leading 'm' (e.g. '1' for 'm1_name')
df['prefix'] = df['City'].str[1]
# Most frequent prefix per country
modes = df.groupby('Country')['prefix'].agg(pd.Series.mode).rename("modes")
# Keep only the rows whose prefix matches the per-country mode
df = df.merge(modes, how="right", left_on=['Country','prefix'], right_on=['Country',"modes"])
df = df.drop(['modes','prefix'], axis=1)
print(df)

  Country         City
0   Spain      m1_name
1   Spain  m1_location
2   Spain      m1_size
3     USA      m3_size
4     USA  m3_location

You could try groupby with apply to filter the rows of each group:

out = (df.assign(prefix=df['City'].str.split('_').str[0])   # e.g. 'm1' from 'm1_name'
       .groupby('Country')
       # keep the rows of each group whose prefix is (one of) the group's mode(s)
       .apply(lambda g: g[g['prefix'].isin(g['prefix'].mode())])
       .reset_index(drop=True)
       .drop('prefix', axis=1))
print(out)

  Country         City
0   Spain      m1_name
1   Spain  m1_location
2   Spain      m1_size
3     USA      m3_size
4     USA  m3_location
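
Note that this resets the index (the USA rows become 3 and 4), while the expected result keeps the original labels 6 and 7. If you want to preserve the original row labels, here is a minimal sketch of the same per-group mode check expressed as a boolean transform (a variant of the answer above, not part of it):

# Prefix of each row, e.g. 'm1' for 'm1_name'
prefix = df['City'].str.split('_').str[0]
# True where the row's prefix is (one of) the most frequent prefix(es) of its country
keep = prefix.groupby(df['Country']).transform(lambda s: s.isin(s.mode()))
out = df[keep]
print(out)   # keeps the original row labels: 0, 1, 2, 6, 7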

Use:

In [575]: df['Prefix_count'] = df.groupby(['Country', df.City.str.split('_').str[0]])['City'].transform('size')

In [589]: idx = df.groupby('Country')['Prefix_count'].transform('max') == df['Prefix_count']

In [593]: df[idx].drop('Prefix_count', axis=1)
Out[593]: 
  Country         City
0   Spain      m1_name
1   Spain  m1_location
2   Spain      m1_size
6     USA      m3_size
7     USA  m3_location
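
For reuse, the same transform idea can be wrapped in a small helper. This is a sketch that assumes the prefix is everything before the first '_'; the name keep_most_frequent_prefix and the sep parameter are illustrative additions, not part of the answer:

import pandas as pd

def keep_most_frequent_prefix(df, group_col='Country', value_col='City', sep='_'):
    """Keep the rows whose prefix (text before `sep`) is the most frequent within each group.
    Ties keep all tied prefixes, mirroring the behaviour of .mode()."""
    prefix = df[value_col].str.split(sep).str[0]
    # Size of each (group, prefix) combination, broadcast back to the individual rows
    counts = df.groupby([group_col, prefix])[value_col].transform('size')
    # Keep the rows whose prefix count equals the per-group maximum
    return df[counts == counts.groupby(df[group_col]).transform('max')]

print(keep_most_frequent_prefix(df))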

An interesting note about the proposed solutions is that Mayank's is much faster. I ran them on 1000 rows of my data and got:

Mayank's solution: 0.020 seconds
Ynjxsjmh's solution: 0.402 seconds
My (OP) solution: 0.122 seconds
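
A rough way to reproduce such a timing comparison is sketched below; the synthetic 1000-row dataframe and the transform_based helper are placeholders adapted from the transform-based answer, not the OP's actual data or code:

import time
import numpy as np
import pandas as pd

# Hypothetical 1000-row dataset standing in for the OP's data
rng = np.random.default_rng(0)
n = 1000
bench = pd.DataFrame({
    "Country": rng.choice(["Spain", "USA", "France", "Italy"], size=n),
    "City": [f"m{m}_{s}" for m, s in zip(rng.integers(1, 6, size=n),
                                         rng.choice(["name", "size", "location"], size=n))],
})

def transform_based(df):
    # Adapted from the transform-based answer above
    counts = df.groupby(['Country', df.City.str.split('_').str[0]])['City'].transform('size')
    return df[counts == counts.groupby(df['Country']).transform('max')]

start = time.perf_counter()
transform_based(bench)
print(f"transform-based solution: {time.perf_counter() - start:.3f} seconds")

The other approaches can be timed the same way for a like-for-like comparison.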

