
Pandas aggregate then get group average

My goal is to do an aggregation and get the group-wise average:

- sum the column values that belong to a specific group
- divide that by the number of observations in that group
- preferably in pandas instead of going to R

My original data set has multiple rows per group:

    user_id  performance  group expert_level
0   164         30          0      L-1
1   164          3          1      L-1
2   164         23          2      L-1
3   164          1          3      L-1       
4   164          1          4      L-1       
5  2178        136          0      L-3       
6  2178         16          1      L-3       
7  2178          5          2      L-3       
8  2178         25          3      L-3       
9  2178          4          4      L-3 

I wanted to get one row per user, so I did the following operations:

import pandas as pd

filelocation = '~/somefile.csv'
df = pd.read_csv(filelocation)

pivoted = df.pivot('user_id', 'group', 'performance')
lookup = df.drop_duplicates('user_id')[['user_id', 'expert_level']]
lookup.set_index(['user_id'], inplace=True)
result = pivoted.join(lookup)
result = result.fillna(0)
result.loc[:, 0:15] = result.loc[:, 0:15].div(result.sum(axis=1), axis=0)
print(result.head())

The operations above get me the following (there are 15 columns but only a few are shown; the group column is included):

                0         1         2         3         4         5         6      group
user_id                                                                         
2        0.863296  0.059643  0.023498  0.018470  0.022241  0.004797  0.000795       L-5
4        0.836877  0.018336  0.049429  0.025246  0.052706  0.002436  0.004075       L-2
16       0.910467  0.046083  0.017775  0.011192  0.011192  0.000658  0.000000       L-4
50       0.754286  0.137143  0.064762  0.009524  0.034286  0.000000  0.000000       L-5
51       0.401827  0.120086  0.041260  0.085395  0.286462  0.032434  0.001232       L-1
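For reference, the pivot / join / normalize pipeline above can be run end-to-end on the sample rows from the question. This is a minimal sketch: the inline DataFrame and the `num_cols` variable are my own additions, not part of the original code.

```python
import pandas as pd

# Sample rows from the original long-format data (values from the question)
df = pd.DataFrame({
    'user_id': [164] * 5 + [2178] * 5,
    'performance': [30, 3, 23, 1, 1, 136, 16, 5, 25, 4],
    'group': [0, 1, 2, 3, 4] * 2,
    'expert_level': ['L-1'] * 5 + ['L-3'] * 5,
})

# One row per user: the group values become columns holding performance
pivoted = df.pivot(index='user_id', columns='group', values='performance')
lookup = df.drop_duplicates('user_id')[['user_id', 'expert_level']]
lookup = lookup.set_index('user_id')
result = pivoted.join(lookup).fillna(0)

# Normalize each user's performance columns to row-wise proportions
num_cols = pivoted.columns
result[num_cols] = result[num_cols].div(result[num_cols].sum(axis=1), axis=0)
print(result)
```

Restricting the row sum to the numeric columns (rather than `result.sum(axis=1)`) avoids mixing the non-numeric `expert_level` column into the denominator, which recent pandas versions no longer tolerate silently.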

Now what I want to do is sum all columns by group level to get the following:

group   X0        X1        X2        X3        X4       X5       X6      X7       X8  
L-4   70294161  41480184  85284328  32006784  24122706  7559884  9984039 1226385 13104 
L-3  139093997  65157598 158343549  55562729  40113567 12062095 15126124 1642933 18661 
L-6  286610049 214763097 383541227 175932665 152843219 49444750 54246772 5863108 78769 
L-5   43320302  29719739  58270825  24719553  19347706  5876604  7483654  789694  8734  
L-2   69965163  23882048  80798434  26442583  16951986  4495711  5789449  550780  7190  
L-1   22486756   5373632  26068005   7755806   4204398   950759  1626565  123037  2156  

After pandas, I brought the file into R and ran the following to get the table above:

dt.agg <- dt[,lapply(.SD, mean),by=group]

But as you can see, the aggregated numbers don't make sense; those numbers should be between 0 and 1. How can I get the same table using pandas instead of R? I feel like R is doing something strange.

I even tried the following:

dt.agg <- dt[, lapply(.SD, function(x){sum(x)/.N}), by = group]

Yet the results are the same, so I want to do this entirely in pandas instead of going to R.

PS: I have dropped the user_id column: df$user_id <- NULL

Try:

> ddt
   user_id       X0       X1       X2       X3       X4       X5       X6 group
1:       2 0.863296 0.059643 0.023498 0.018470 0.022241 0.004797 0.000795   L-5
2:       4 0.836877 0.018336 0.049429 0.025246 0.052706 0.002436 0.004075   L-2
3:      16 0.910467 0.046083 0.017775 0.011192 0.011192 0.000658 0.000000   L-4
4:      50 0.754286 0.137143 0.064762 0.009524 0.034286 0.000000 0.000000   L-5
5:      51 0.401827 0.120086 0.041260 0.085395 0.286462 0.032434 0.001232   L-1

> ddt[,lapply(ddt[,2:8,with=F], mean),by=group]
   group        X0        X1        X2        X3        X4       X5        X6
1:   L-5 0.7533506 0.0762582 0.0393448 0.0299654 0.0813774 0.008065 0.0012204
2:   L-2 0.7533506 0.0762582 0.0393448 0.0299654 0.0813774 0.008065 0.0012204
3:   L-4 0.7533506 0.0762582 0.0393448 0.0299654 0.0813774 0.008065 0.0012204
4:   L-1 0.7533506 0.0762582 0.0393448 0.0299654 0.0813774 0.008065 0.0012204
(Every row is identical here because `ddt[,2:8,with=F]` inside `lapply` references the full table, ignoring the `by=group` split — each group just gets the overall column means.)

Actually, your own code also works:

> ddt[,lapply(.SD, mean),by=group]
   group user_id       X0       X1       X2       X3        X4        X5        X6
1:   L-5      26 0.808791 0.098393 0.044130 0.013997 0.0282635 0.0023985 0.0003975
2:   L-2       4 0.836877 0.018336 0.049429 0.025246 0.0527060 0.0024360 0.0040750
3:   L-4      16 0.910467 0.046083 0.017775 0.011192 0.0111920 0.0006580 0.0000000
4:   L-1      51 0.401827 0.120086 0.041260 0.085395 0.2864620 0.0324340 0.0012320
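For completeness, the same group-wise mean can be done directly in pandas, which is the asker's stated preference. A minimal sketch, assuming the wide table `ddt` shown above (reconstructed here with only the `X0` and `X1` columns):

```python
import pandas as pd

# The five sample rows shown above (partial reconstruction of ddt)
ddt = pd.DataFrame({
    'user_id': [2, 4, 16, 50, 51],
    'X0': [0.863296, 0.836877, 0.910467, 0.754286, 0.401827],
    'X1': [0.059643, 0.018336, 0.046083, 0.137143, 0.120086],
    'group': ['L-5', 'L-2', 'L-4', 'L-5', 'L-1'],
})

# Equivalent of dt[, lapply(.SD, mean), by = group]:
# drop the identifier first so it is not averaged along with the data
agg = ddt.drop(columns='user_id').groupby('group').mean()
print(agg)
```

`groupby('group').mean()` is exactly "sum of column values in the group divided by the number of observations in the group", so the output matches the `.SD` result above (e.g. 0.808791 for X0 in L-5, the mean of users 2 and 50).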
