Python - compare two columns in dataframe
I have two files with minor differences between the two. I want to output the values that are different so that I can see what changed. There are a lot of columns to compare.
Here's sample data (the only difference in this example is the Status on the first row):
Data1
ID PROGRAM_CODE Status
123 888 Active
123 777 Active
345 777 Inactive
345 999 Active
678 666 Inactive
901 777 Inactive
901 888 Active
Data2
ID PROGRAM_CODE Status
123 888 BLAH
123 777 Active
345 777 Inactive
345 999 Active
678 666 Inactive
901 777 Inactive
901 888 Active
Desired Output:
ID Status_1 Status_2
123 Active BLAH
My current approach is to create a list of columns, merge the two dataframes, and then use the list of columns in a for loop to compare. I believe my code is comparing series and outputting the whole series if there is any difference at all. I just want to see the one row with different values. Also, this doesn't work if one field has a value and it is blank in the other dataframe.
Code:
import pandas as pd
df1 = pd.read_excel(r"P:\data_files\data1.xlsx")
df2 = pd.read_excel(r"P:\data_files\data2.xlsx")
# create list of columns
l1 = list(df1)
# dropping the join values from the list because I don't want to compare those
l1 = [e for e in l1 if e not in ('ID','PROGRAM_CODE')]
# merge dataframes
df3 = df1.merge(df2, how='outer', on=['ID','PROGRAM_CODE'], suffixes=['_1', '_2'])
for x in l1:
    if df3[x+'_1'].any() != df3[x+'_2'].any():
        print(df3[['ID', x+'_1', x+'_2']])
Output of above code: it shows all values for the Status column even though only the first row has values that differ between the data frames.
ID Status_1 Status_2
123 Active BLAH
123 Active Active
345 Inactive Inactive
345 Active Active
678 Inactive Inactive
901 Inactive Inactive
901 Active Active
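(Note: the `.any()` calls above reduce each column to a single truthy check, so the comparison can never isolate individual rows. A minimal element-wise sketch, with the sample data inlined in place of the Excel files, prints only the differing row:)

```python
import pandas as pd

# Sample data inlined in place of the Excel files
df1 = pd.DataFrame({
    'ID': [123, 123, 345, 345, 678, 901, 901],
    'PROGRAM_CODE': [888, 777, 777, 999, 666, 777, 888],
    'Status': ['Active', 'Active', 'Inactive', 'Active',
               'Inactive', 'Inactive', 'Active'],
})
df2 = df1.copy()
df2.loc[0, 'Status'] = 'BLAH'  # the one changed value

df3 = df1.merge(df2, how='outer', on=['ID', 'PROGRAM_CODE'],
                suffixes=['_1', '_2'])

for x in ['Status']:
    # fillna('') so a value on one side vs. a blank on the other counts as a diff
    left = df3[x + '_1'].fillna('')
    right = df3[x + '_2'].fillna('')
    mask = left.ne(right)  # element-wise, row by row
    if mask.any():
        print(df3.loc[mask, ['ID', x + '_1', x + '_2']])
```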
Edit 12/12/17: The example from Wen below seems to work for one column, but I need to compare every row and column for two files where ID and Program_Code are the same.
I tried this loop:
for x in l1:
    print(df3.groupby('STUDENT_CID').x.apply(list).apply(pd.Series).add_prefix(x+'_'))
but I get the following error:
AttributeError: 'DataFrameGroupBy' object has no attribute 'x'
I need a way to loop through every column (both files contain the same columns).
Additional Example:
Data File 1
ID PROGRAM_CODE I_CODE INSTITUTION TERM TYPE STATUS Hire_Date
123 888 111 ZBD Fall FINAL Active 1/1/2017 0:00
123 777 111 ZBD Fall FINAL Active 1/1/2017 0:00
345 777 125 GUB Fall FINAL Inactive 2/3/2017 0:00
345 999 125 GUB Fall FINAL Inactive 2/3/2017 0:00
678 999 111 ZBD Fall FINAL Active 1/1/2017 0:00
678 888 111 ZBD Fall FINAL Active 1/1/2017 0:00
901 888 654 YUI Fall FINAL Inactive 5/1/2017 0:00
901 777 654 YUI Fall FINAL Inactive 5/1/2017 0:00
Data File 2
ID PROGRAM_CODE I_CODE INSTITUTION TERM TYPE STATUS Hire_Date
123 888 111 ZBD Fall FINAL Inactive 1/1/2017 0:00
123 777 111 ZBD Fall FINAL Active 1/1/2017 0:00
345 777 111 ZBD Fall FINAL Inactive 2/3/2017 0:00
345 999 111 ZBD Fall FINAL Inactive 2/3/2017 0:00
678 999 111 ZBD Fall FINAL Active 1/1/2017 0:00
678 888 111 ZBD Fall FINAL Active 1/1/2017 0:00
901 888 654 YUI Fall FINAL Inactive 5/1/2017 0:00
901 777 654 YUI Fall FINAL Inactive 5/1/2017 0:00
Desired Output
ID STATUS_1 STATUS_2
123 Active Inactive
ID INSTITUTION_1 INSTITUTION_2
345 125 111
We use pd.concat + drop_duplicates:
df1 = pd.concat([df1, df2]).drop_duplicates(keep=False)
df1
Out[1085]:
ID PROGRAM_CODE Status
0 123 888 Active
0 123 888 BLAH
Then we groupby to create the table you need:
df1.groupby('ID').Status.apply(list).apply(pd.Series).add_prefix('Status_')
Out[1094]:
Status_0 Status_1
ID
123 Active BLAH
Updated
df = pd.concat([df1, df2]).drop_duplicates(keep=False)
dd = df.groupby('ID').agg(lambda x: sorted(set(x), key=list(x).index)).stack()
dd[dd.apply(len) > 1]
Out[1194]:
ID
123 STATUS [Active, Inactive]
345 PROGRAM_CODE [777, 999]
I_CODE [125, 111]
INSTITUTION [GUB, ZBD]
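Put together as a self-contained example (two-row slices of the additional example, recreated inline for brevity):

```python
import pandas as pd

# Two-row slices of the additional example, recreated inline
df1 = pd.DataFrame({
    'ID': [123, 345],
    'PROGRAM_CODE': [888, 777],
    'INSTITUTION': ['ZBD', 'GUB'],
    'STATUS': ['Active', 'Inactive'],
})
df2 = pd.DataFrame({
    'ID': [123, 345],
    'PROGRAM_CODE': [888, 777],
    'INSTITUTION': ['ZBD', 'ZBD'],
    'STATUS': ['Inactive', 'Inactive'],
})

# Rows appearing in only one frame survive drop_duplicates(keep=False)
df = pd.concat([df1, df2]).drop_duplicates(keep=False)
# Collect the distinct values per ID and column, preserving first-seen order
dd = df.groupby('ID').agg(lambda x: sorted(set(x), key=list(x).index)).stack()
# Cells with more than one distinct value are the actual differences
diffs = dd[dd.apply(len) > 1]
print(diffs)
```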
I'm sure there are better ways to do it, but have you tried merging the dataframes (as you already are), creating a new column that compares Status_1 and Status_2, and then dropping any rows where that match is True? If you then drop that "do they match" column, I believe you'll wind up with your desired output.
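That idea might look something like this (a sketch with hypothetical two-row frames standing in for the real files):

```python
import pandas as pd

# Hypothetical stand-ins for the two files
df1 = pd.DataFrame({'ID': [123, 345], 'Status': ['Active', 'Inactive']})
df2 = pd.DataFrame({'ID': [123, 345], 'Status': ['BLAH', 'Inactive']})

merged = df1.merge(df2, on='ID', suffixes=['_1', '_2'])
# Flag rows where the two Status columns agree
merged['match'] = merged['Status_1'] == merged['Status_2']
# Keep only mismatches, then drop the helper column
result = merged[~merged['match']].drop(columns='match')
print(result)
```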