
Aggregate unique values from multiple columns with pandas GroupBy

I have gone through countless threads and still can't find a solution to my problem. I have a dataframe like this:

prop1 prop2 prop3    prop4 
L30   3     bob      11.2
L30   54    bob      10
L30   11    john     10
L30   10    bob      10
K20   12    travis   10 
K20   1     travis   4 
K20   66    leo      10
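
For reference, the frame can be rebuilt roughly like this (prop2 as int and prop4 as float here):

import pandas as pd

df = pd.DataFrame({
    'prop1': ['L30', 'L30', 'L30', 'L30', 'K20', 'K20', 'K20'],
    'prop2': [3, 54, 11, 10, 12, 1, 66],
    'prop3': ['bob', 'bob', 'john', 'bob', 'travis', 'travis', 'leo'],
    'prop4': [11.2, 10, 10, 10, 10, 4, 10],
})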

I would like to do a groupby on prop1 and, at the same time, aggregate all the other columns, keeping only their unique values, like this:

prop1  prop2       prop3       prop4
L30    3,54,11,10  bob,john    11.2,10
K20    12,1,66     travis,leo  10,4

I have tried different methods:

  1. df.groupby('prop1')['prop2','prop3','prop4'].apply(np.unique) returns

AttributeError: 'numpy.ndarray' object has no attribute 'index' plus TypeError: Series.name must be a hashable type

  2. .apply(lambda x: pd.unique(x.values.ravel()).tolist()) gives a list as output, and I would like columns.

  3. df.groupby('prop1')['prop2','prop3','prop4'].unique() by itself doesn't work because there are multiple columns.

  4. .apply(f) with f being:

    def f(df):
        df['prop2'] = df['prop2'].drop_duplicates()
        df['prop3'] = df['prop3'].drop_duplicates()
        df['prop4'] = df['prop4'].drop_duplicates()
        return df

doesn't do anything.

  5. I also tried .agg() with different options, but without success.

Does anyone have an idea?

Thank you very much :)

Use groupby and agg, and aggregate only the unique values by calling Series.unique:

df.astype(str).groupby('prop1').agg(lambda x: ','.join(x.unique()))

            prop2       prop3      prop4
prop1                                   
K20       12,1,66  travis,leo   10.0,4.0
L30    3,54,11,10    bob,john  11.2,10.0

If you want to keep the groups in their original order, pass sort=False:

df.astype(str).groupby('prop1', sort=False).agg(lambda x: ','.join(x.unique()))

            prop2       prop3      prop4
prop1                                   
L30    3,54,11,10    bob,john  11.2,10.0
K20       12,1,66  travis,leo   10.0,4.0
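
If you would rather have actual lists than comma-joined strings, one variant (a sketch, skipping the astype(str) and the join) is:

df.groupby('prop1').agg(lambda x: x.unique().tolist())

Each cell then holds a Python list such as ['bob', 'john'] instead of a string.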

If handling NaNs is important, call fillna in advance:

import re
df.fillna('').astype(str).groupby('prop1').agg(
    lambda x: re.sub(',+', ',', ','.join(x.unique()))
)

            prop2       prop3      prop4
prop1                                   
K20       12,1,66  travis,leo   10.0,4.0
L30    3,54,11,10    bob,john  11.2,10.0
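
An alternative sketch that avoids the fillna/re.sub round trip is to drop the NaNs inside the lambda before converting to strings:

df.groupby('prop1').agg(lambda x: ','.join(x.dropna().astype(str).unique()))

This simply skips missing values instead of turning them into empty strings first.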

melt + pivot_table

s = df.astype(str).melt(id_vars='prop1').drop_duplicates()

s.pivot_table(
  index='prop1',
  columns='variable',
  values='value',
  aggfunc=','.join)

variable       prop2       prop3      prop4
prop1
K20          12,1,66  travis,leo   10.0,4.0
L30       3,54,11,10    bob,john  11.2,10.0
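
The pivot_table result keeps prop1 as the index and names the column axis variable; if you want a flat frame like the desired output, one way (a sketch) is to chain rename_axis and reset_index:

s.pivot_table(
  index='prop1',
  columns='variable',
  values='value',
  aggfunc=','.join).rename_axis(columns=None).reset_index()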

Try this, it worked for me perfectly:

df.astype(str).groupby('prop1', sort=False).agg(lambda x: ','.join(x.unique())).reset_index()

This will give the result as:

prop1  prop2       prop3       prop4
L30    3,54,11,10  bob,john    11.2,10
K20    12,1,66     travis,leo  10,4
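
If you later need the individual values again, the joined strings can be split back into lists with Series.str.split (result below is just a name for the aggregated frame):

result = df.astype(str).groupby('prop1', sort=False).agg(lambda x: ','.join(x.unique())).reset_index()
result['prop3'] = result['prop3'].str.split(',')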
