
Get the mean of a column after concatenating one column onto the end of another in pandas

I have a dataset that looks like this:

    Interactor A    Interactor B    Interaction Score   score2
0   P02574  P39205  0.928736    0.375000
1   P02574  Q6NR18  0.297354    0.166667
2   P02574  Q7KML4  0.297354    0.142857
3   P02574  Q9BP34  0.297354    0.166667
4   P02574  Q9BP35  0.297354    0.16666

data.shape = (112049, 5)

I want to append the Interactor B values to the end of the Interactor A column, keep only the unique values, and add a column that shows each interactor's Rank (count). I did this by:

cols = [data[col].squeeze() for col in data[['Interactor A', 'Interactor B']]]
n = pd.concat(cols, ignore_index=True)
n = pd.DataFrame(n, columns=['AB'])

To make the column unique:

t = pd.unique(n['AB'])
t = pd.DataFrame(t, columns=["AB"])
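As an aside, the concatenate-then-deduplicate steps can be written as one method chain. A minimal sketch on a toy frame (the toy data is an assumption for illustration, not the full 112049-row dataset):

```python
import pandas as pd

# Toy frame mirroring the question's layout (made-up subset for illustration)
data = pd.DataFrame({
    "Interactor A": ["P02574"] * 5,
    "Interactor B": ["P39205", "Q6NR18", "Q7KML4", "Q9BP34", "Q9BP35"],
})

# Stack both interactor columns into one Series, then drop duplicates,
# keeping the first occurrence of each value in order of appearance
t = (pd.concat([data["Interactor A"], data["Interactor B"]],
               ignore_index=True)
       .drop_duplicates()
       .reset_index(drop=True)
       .to_frame(name="AB"))
print(t)
```

This produces the same unique "AB" column as the two-step `pd.unique` version above.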

Then:

t2 = n.groupby(['AB'], sort=False).size()
t2 = pd.DataFrame(t2)

Finally, by concatenating t2 and t:

data_1 = pd.concat([t, t2], axis=1)


AB  Rank
0   P02574  4


data_1.shape = (13631, 2)
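The same unique-plus-Rank frame can also be built in one step with `value_counts`. A sketch on a toy frame (the data and the column names `AB`/`Rank` are assumptions carried over from the steps above):

```python
import pandas as pd

# Toy frame mirroring the question's layout (made-up subset for illustration)
data = pd.DataFrame({
    "Interactor A": ["P02574"] * 5,
    "Interactor B": ["P39205", "Q6NR18", "Q7KML4", "Q9BP34", "Q9BP35"],
})

ab = pd.concat([data["Interactor A"], data["Interactor B"]],
               ignore_index=True)

# value_counts returns each unique interactor with its number of occurrences;
# sort=False keeps order of first appearance instead of sorting by count
data_1 = (ab.value_counts(sort=False)
            .rename_axis("AB")
            .reset_index(name="Rank"))
print(data_1)
```

In this toy subset P02574 appears five times (all as Interactor A), so its Rank is 5.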

Now I want to add the Interaction Score and score2 columns to this DataFrame. If there are duplicates, take the mean of their Interaction Score, delete the duplicates, and replace the Interaction Score value with the mean.

I used:

score2 = data.groupby(['Interactor A','Interactor B'])['score2'].mean()
score2 = pd.DataFrame(score2, columns=['score2']) 

The output in this case looks like:

                              score2
Interactor A  Interactor B
A0A023GPK8    Q9VQW1        0.200000
A0A076NAB7    Q9VYN8        0.000000
A0A0B4JD97    Q400N2        0.000000
              Q9VC64        0.090909
              Q9VNE4        0.307692

112049 rows × 1 columns
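A note on the output above: the two-level index is a pandas MultiIndex, and `reset_index()` flattens it back into ordinary columns, with duplicate (A, B) pairs collapsing to their mean. A sketch with made-up values (the duplicated pair and its scores are assumptions for illustration):

```python
import pandas as pd

# Toy frame with one duplicated interactor pair (values are made up)
data = pd.DataFrame({
    "Interactor A": ["P02574", "P02574"],
    "Interactor B": ["P39205", "P39205"],
    "score2": [0.2, 0.4],
})

# reset_index() turns the MultiIndex back into plain columns, leaving
# one row per unique (A, B) pair with the mean score2
score2 = (data.groupby(["Interactor A", "Interactor B"])["score2"]
              .mean()
              .reset_index())
print(score2)
```

The duplicated pair collapses to a single row with score2 = 0.3.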

But what I want is to add columns with the mean of 'score2' and 'Interaction Score' for the 13631 unique interactors I made. How can I achieve this? The final df should look like:

  Interactor  Rank  Interaction Score  score2
0     P02574     5           0.928736    0.44

i.e., score2 is the average of all 'P02574' scores that appear in the dataset.

IIUC - you simply need to reshape your data from wide to long and then run an aggregation, assuming scores pair with interactors one-for-one. Consider wide_to_long for the reshape after setting up stub names and an id field. Then run groupby().agg() for counts and means.

Data

from io import StringIO
import pandas as pd    

txt = '''    "Interactor A"    "Interactor B"    "Interaction Score"   "score2"
0   P02574  P39205  0.928736    0.375000
1   P02574  Q6NR18  0.297354    0.166667
2   P02574  Q7KML4  0.297354    0.142857
3   P02574  Q9BP34  0.297354    0.166667
4   P02574  Q9BP35  0.297354    0.16666'''

data = pd.read_csv(StringIO(txt), sep=r"\s+")

Reshape

# FOR id FIELD
data["id"] = data.index

# FOR STUB NAMES
data = data.rename(columns={"Interaction Score": "score A",
                            "score2": "score B"})

df_long = pd.wide_to_long(data, ["Interactor", "score"], i="id", 
                           j="score_type", sep=" ", suffix="(A|B)")

df_long
#               Interactor     score
# id score_type                     
# 0  A              P02574  0.928736
# 1  A              P02574  0.297354
# 2  A              P02574  0.297354
# 3  A              P02574  0.297354
# 4  A              P02574  0.297354
# 0  B              P39205  0.375000
# 1  B              Q6NR18  0.166667
# 2  B              Q7KML4  0.142857
# 3  B              Q9BP34  0.166667
# 4  B              Q9BP35  0.166660

Interactor Aggregation

df_long.groupby(["Interactor"])["score"].agg(["count", "mean"])

#            count      mean
# Interactor
# P02574         5  0.423630
# P39205         1  0.375000
# Q6NR18         1  0.166667
# Q7KML4         1  0.142857
# Q9BP34         1  0.166667
# Q9BP35         1  0.166660

Interactor + Score Groupby Aggregation

df_long.groupby(["Interactor", "score_type"])['score'].agg(["count", "mean"])

#                        count      mean
# Interactor score_type                 
# P02574     A               5  0.423630
# P39205     B               1  0.375000
# Q6NR18     B               1  0.166667
# Q7KML4     B               1  0.142857
# Q9BP34     B               1  0.166667
# Q9BP35     B               1  0.166660

Interactor + Score Pivot Aggregation

df_long.pivot_table(index="Interactor", columns="score_type", values='score',
                    aggfunc = ["count", "mean"])

#            count          mean          
# score_type     A    B        A         B
# Interactor                              
# P02574       5.0  NaN  0.42363       NaN
# P39205       NaN  1.0      NaN  0.375000
# Q6NR18       NaN  1.0      NaN  0.166667
# Q7KML4       NaN  1.0      NaN  0.142857
# Q9BP34       NaN  1.0      NaN  0.166667
# Q9BP35       NaN  1.0      NaN  0.166660
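To get from the pivot above to the exact layout the question asks for (Interactor, Rank, Interaction Score, score2), the column MultiIndex can be flattened. A sketch under these assumptions: Rank is the total appearance count across both sides, the two means map back to the original score columns (so they are per-interactor averages, e.g. P02574's Interaction Score is 0.42363 rather than any single row's value), and interactors that only ever appear on one side get NaN for the other score:

```python
from io import StringIO
import pandas as pd

txt = '''    "Interactor A"    "Interactor B"    "Interaction Score"   "score2"
0   P02574  P39205  0.928736    0.375000
1   P02574  Q6NR18  0.297354    0.166667
2   P02574  Q7KML4  0.297354    0.142857
3   P02574  Q9BP34  0.297354    0.166667
4   P02574  Q9BP35  0.297354    0.16666'''

data = pd.read_csv(StringIO(txt), sep=r"\s+")
data["id"] = data.index
data = data.rename(columns={"Interaction Score": "score A",
                            "score2": "score B"})

# Same reshape as above, then bring the index levels back as columns
df_long = pd.wide_to_long(data, ["Interactor", "score"], i="id",
                          j="score_type", sep=" ",
                          suffix="(A|B)").reset_index()

wide = df_long.pivot_table(index="Interactor", columns="score_type",
                           values="score", aggfunc=["count", "mean"])

# Flatten the (aggfunc, score_type) column MultiIndex to plain names
wide.columns = [f"{agg} {st}" for agg, st in wide.columns]

# Rank = total appearances on either side; NaNs are skipped by sum
final = pd.DataFrame({
    "Interactor": wide.index,
    "Rank": wide[["count A", "count B"]].sum(axis=1).astype(int).values,
    "Interaction Score": wide["mean A"].values,
    "score2": wide["mean B"].values,
})
print(final)
```

For this sample, P02574 gets Rank 5 and Interaction Score 0.42363 (the mean of its five "score A" values); the B-side interactors each get Rank 1.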
