
How to “group by” multiple variables, eliminating duplicates, with Python pandas

I have an input file with this sort of data:

**> Because the input file is large, I need to keep only the unique userID-locationID pairs (some kind of preprocessing).**

userID locationID
     1       loc1 
     1       loc2 
     1       loc3 
     2       loc1 
     3       loc4 
     3       loc3 
     3       loc1

I have to find how many distinct users checked in at each location and get a new column with those values. I already tried this, but it is not what I need:

DataFrame({'count': df.groupby(["userID", "locationID"]).size()}).reset_index()
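For reference, the "unique pairs" preprocessing mentioned above can be done with drop_duplicates; a minimal sketch, assuming a DataFrame named df with the userID and locationID columns from the sample (the repeated last row is added only for illustration):

import pandas as pd

# Hypothetical raw data with one repeated userID-locationID pair
df = pd.DataFrame({'userID': [1, 1, 1, 2, 3, 3, 3, 3],
                   'locationID': ['loc1', 'loc2', 'loc3', 'loc1', 'loc4', 'loc3', 'loc1', 'loc1']})

# Keep only unique (userID, locationID) pairs before any aggregation
pairs = df.drop_duplicates(['userID', 'locationID'])
print(pairs)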

This should be what you are looking for, but I'm not sure if there's an easier way:

In [5]: df.groupby(['locID','userId']).last().groupby(level='locID').size()
Out[5]: 
locID
loc1     3
loc2     1
loc3     2
loc4     1
dtype: int64

Taking the last row of each group removes the duplicates.
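A self-contained sketch of that approach, assuming the column names from the question (userID/locationID) rather than the locID/userId used in the snippet above:

import pandas as pd

# Sample data with one duplicated (3, 'loc1') pair
df = pd.DataFrame({'userID': [1, 1, 1, 2, 3, 3, 3, 3],
                   'locationID': ['loc1', 'loc2', 'loc3', 'loc1', 'loc4', 'loc3', 'loc1', 'loc1']})

# Grouping by both columns collapses duplicate pairs; the second groupby
# then counts how many distinct users remain per location
counts = df.groupby(['locationID', 'userID']).last().groupby(level='locationID').size()
print(counts)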

There's a Series (groupby) method just for this: nunique.

In [11]: df  # Note the duplicated row I appended at the end
Out[11]:
   userID locationID
0       1       loc1
1       1       loc2
2       1       loc3
3       2       loc1
4       3       loc4
5       3       loc3
6       3       loc1
7       3       loc1

In [12]: g = df.groupby('locationID')

In [13]: g['userID'].nunique()
Out[13]:
locationID
loc1          3
loc2          1
loc3          2
loc4          1
dtype: int64
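Since the question also asks for the result as a new column of values, the nunique result can be turned back into a DataFrame with reset_index; a minimal sketch, where the distinct_users column name is just an example:

import pandas as pd

# Same data as In [11], including the duplicated last row
df = pd.DataFrame({'userID': [1, 1, 1, 2, 3, 3, 3, 3],
                   'locationID': ['loc1', 'loc2', 'loc3', 'loc1', 'loc4', 'loc3', 'loc1', 'loc1']})

# nunique counts distinct users per location; reset_index(name=...)
# converts the resulting Series into a two-column DataFrame
result = df.groupby('locationID')['userID'].nunique().reset_index(name='distinct_users')
print(result)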

Solution:

df.groupby(['locationID']).size()

returns:

locationID
loc1     3
loc2     1
loc3     2
loc4     1
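One caveat: size() counts rows per location, not distinct users, so it only gives the right answer after duplicate userID-locationID pairs have been removed. A small sketch of the difference, with one duplicated pair added for illustration:

import pandas as pd

# One duplicated (3, 'loc1') pair to show why the preprocessing matters
df = pd.DataFrame({'userID': [1, 1, 1, 2, 3, 3, 3, 3],
                   'locationID': ['loc1', 'loc2', 'loc3', 'loc1', 'loc4', 'loc3', 'loc1', 'loc1']})

print(df.groupby('locationID').size())                                            # loc1 -> 4 (overcounts)
print(df.drop_duplicates(['userID', 'locationID']).groupby('locationID').size())  # loc1 -> 3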

Try it for yourself:

import pandas

txt = '''userID  locationID
 1         loc1 
 1         loc2 
 1         loc3 
 2         loc1 
 3         loc4 
 3         loc3 
 3         loc1'''


# Use the first line for the column names and split the remaining
# lines on whitespace to build the rows
listtxt = list(txt.splitlines())
columns = tuple(filter(None, listtxt.pop(0).split()))
vals = [tuple(filter(None, line.split())) for line in listtxt]
df = pandas.DataFrame(vals, columns=columns)

df now returns:

  userID locationID
0      1       loc1
1      1       loc2
2      1       loc3
3      2       loc1
4      3       loc4
5      3       loc3
6      3       loc1

and

df.groupby(['locationID']).size()

returns:

locationID
loc1          3
loc2          1
loc3          2
loc4          1
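As an aside, the hand-rolled parsing above can usually be replaced with pandas.read_csv on an in-memory buffer; a minimal sketch, assuming the same whitespace-separated text:

import io
import pandas

txt = '''userID  locationID
1  loc1
1  loc2
1  loc3
2  loc1
3  loc4
3  loc3
3  loc1'''

# read_csv can split on runs of whitespace directly
df = pandas.read_csv(io.StringIO(txt), sep=r'\s+')
print(df.groupby('locationID').size())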
import pandas as pn

df = pn.DataFrame({'userId': pn.Series([1, 1, 1, 2, 3, 3, 3]),
                   'locID': pn.Series(['loc1', 'loc2', 'loc3', 'loc1', 'loc4', 'loc3', 'loc1'])})
print(df.groupby(['locID']).count().userId)

OUTPUT:

locID       
loc1        3
loc2        1
loc3        2
loc4        1
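Worth noting: count() tallies non-null rows per group, so it matches the number of distinct users only when each userId-locID pair occurs once. With possible duplicate pairs, nunique() is the safer choice; a small sketch of the difference (the extra duplicated row is added only for illustration):

import pandas as pn

# Same data as above, plus one duplicated (3, 'loc1') row
df = pn.DataFrame({'userId': pn.Series([1, 1, 1, 2, 3, 3, 3, 3]),
                   'locID': pn.Series(['loc1', 'loc2', 'loc3', 'loc1', 'loc4', 'loc3', 'loc1', 'loc1'])})

print(df.groupby(['locID'])['userId'].count())    # loc1 -> 4 (counts rows)
print(df.groupby(['locID'])['userId'].nunique())  # loc1 -> 3 (counts distinct users)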
