
How to group 2 dataframes based on different conditions and columns in pandas

I have dataframes like the examples below:

Dataframe 1

Dataframe 2
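
(The original screenshots of the two dataframes are not reproduced here; judging from the sample data used in the answers below, they presumably contain something like the following:)

import pandas as pd

# Presumed contents of the two dataframes, reconstructed from the answers below
df1 = pd.DataFrame({'ColA': ['A', 'A', 'B', 'B', 'B'],
                    'ColB': [1, 2, 3, 4, 5],
                    'ColC': [[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3], [3, 4, 5]]})
df2 = pd.DataFrame({'ColA': ['A', 'A', 'A', 'B', 'B'],
                    'ColC': [3, 6, 2, 2, 5]})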

What I want to achieve is to combine the two dataframes on ColA, keeping only the rows where the ColC value in dataframe 2 is present in the ColC list of dataframe 1. Could you please suggest an efficient and simple approach to solve this problem? I know it can be done by looping through the rows of dataframe 1 and comparing values, but I feel there should be a better (pandas) way to solve it.

Thank you in advance

I will use unnesting here.

df1['ListCol'] = df1['ColC']   # keep a copy of the original list column
Yourdf = unnesting(df1, ['ColC']).merge(df2, on=['ColA', 'ColC'], how='inner')
Yourdf
   ColC ColA  ColB    ListCol
0     2    A     1  [1, 2, 3]
1     3    A     1  [1, 2, 3]
2     6    A     2  [4, 5, 6]
3     2    B     4  [1, 2, 3]
4     5    B     5  [3, 4, 5]

import numpy as np
import pandas as pd

def unnesting(df, explode):
    # repeat each index once per element of the list column(s) being exploded
    idx = df.index.repeat(df[explode[0]].str.len())
    df1 = pd.concat([
        pd.DataFrame({x: np.concatenate(df[x].values)}) for x in explode], axis=1)
    df1.index = idx
    # attach the remaining (non-exploded) columns back
    return df1.join(df.drop(columns=explode), how='left')
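
For reference (not part of the original answer): on pandas 0.25 or newer, DataFrame.explode can replace the custom unnesting helper, along the lines of:

Yourdf = (df1.assign(ListCol=df1['ColC'])        # keep a copy of the original list column
             .explode('ColC')                    # one row per element of the list
             .astype({'ColC': int})              # explode leaves object dtype; cast so the merge keys match
             .merge(df2, on=['ColA', 'ColC'], how='inner'))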

You can do it this way: expand ColC in dataframe one (df1), then melt that into one column, then merge on ColA and the "melted" column of df1:

import pandas as pd

df1 = pd.DataFrame({'ColA':[*'AABBB'], 
                    'ColB':[1,2,3,4,5], 
                    'ColC':[[1,2,3],[4,5,6],[7,8,9],[1,2,3],[3,4,5]]})

df2 = pd.DataFrame({'ColA':[*'AAABB'], 'ColC':[3,6,2,2,5]})

df1_m = df1.assign(**pd.DataFrame([i for i in df1['ColC'].values]).add_prefix('ColC_'))\
           .melt(['ColA','ColB','ColC'])

df_out = df2.merge(df1_m, left_on=['ColA','ColC'], right_on=['ColA','value'])
df_out

Output:

  ColA  ColC_x  ColB     ColC_y variable  value
0    A       3     1  [1, 2, 3]   ColC_2      3
1    A       6     2  [4, 5, 6]   ColC_2      6
2    A       2     1  [1, 2, 3]   ColC_1      2
3    B       2     4  [1, 2, 3]   ColC_1      2
4    B       5     5  [3, 4, 5]   ColC_2      5
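
If the helper columns are not needed, a small cleanup step (not in the original answer) could follow, for example:

df_out = (df_out.drop(columns=['ColC_y', 'variable', 'value'])
                .rename(columns={'ColC_x': 'ColC'}))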

Another way is merging on ColA and then using apply with the Python in operator to keep only the rows where ColC_y is in ColC_x:

In [19]: df1
Out[19]:
  ColA  ColB       ColC
0    A     1  [1, 2, 3]
1    A     2  [4, 5, 6]
2    B     3  [7, 8, 9]
3    B     4  [1, 2, 3]
4    B     5  [3, 4, 5]

In [20]: df2
Out[20]:
  ColA  ColC
0    A     3
1    A     6
2    A     2
3    B     2
4    B     5

In [21]: df3 = df1.merge(df2, on=['ColA'])

In [22]: df3
Out[22]:
   ColA  ColB     ColC_x  ColC_y
0     A     1  [1, 2, 3]       3
1     A     1  [1, 2, 3]       6
2     A     1  [1, 2, 3]       2
3     A     2  [4, 5, 6]       3
4     A     2  [4, 5, 6]       6
5     A     2  [4, 5, 6]       2
6     B     3  [7, 8, 9]       2
7     B     3  [7, 8, 9]       5
8     B     4  [1, 2, 3]       2
9     B     4  [1, 2, 3]       5
10    B     5  [3, 4, 5]       2
11    B     5  [3, 4, 5]       5

In [23]: df3[df3.apply(lambda x: x['ColC_y'] in x['ColC_x'], axis=1)]
Out[23]:
   ColA  ColB     ColC_x  ColC_y
0     A     1  [1, 2, 3]       3
2     A     1  [1, 2, 3]       2
4     A     2  [4, 5, 6]       6
8     B     4  [1, 2, 3]       2
11    B     5  [3, 4, 5]       5
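
The same filter can also be written with a plain list comprehension instead of a row-wise apply (a minor variation, not from the original answer), which avoids the per-row apply overhead:

mask = [y in x for x, y in zip(df3['ColC_x'], df3['ColC_y'])]
df3[mask]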
