
How to remove all duplicated rows from 2 CSV files with pandas?

I have two CSV files. The data structures are identical and look like ip, cve. I need to remove all rows that are present in both files and leave only the unique rows (a left anti-join). I thought this could be done with a left join, but it doesn't work. Is there an easier way to solve this problem?

    import pandas as pd

    patrol = pd.read_csv('parse_results_MaxPatrol.csv')
    nessus = pd.read_csv('parse_result_nessus_new.csv')
    nessus_filtered = nessus.merge(patrol, how='left', left_on=[0], right_on=[0])

This code throws the following traceback:

File "C:/Users/username/Desktop/pandas/parser.py", line 6, in <module>
    nessus_filtered = nessus.merge(patrol, how='left', left_on=[0], right_on=[0])
  File "C:\Python37\lib\site-packages\pandas\core\frame.py", line 6868, in merge
    copy=copy, indicator=indicator, validate=validate)
  File "C:\Python37\lib\site-packages\pandas\core\reshape\merge.py", line 47, in merge
    validate=validate)
  File "C:\Python37\lib\site-packages\pandas\core\reshape\merge.py", line 529, in __init__
    self.join_names) = self._get_merge_keys()
  File "C:\Python37\lib\site-packages\pandas\core\reshape\merge.py", line 833, in _get_merge_keys
    right._get_label_or_level_values(rk))
  File "C:\Python37\lib\site-packages\pandas\core\generic.py", line 1706, in _get_label_or_level_values
    raise KeyError(key)
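The KeyError is raised because `left_on=[0]` and `right_on=[0]` tell pandas to merge on a column literally named `0`, which neither frame has. Passing the real column names fixes it, and `indicator=True` lets you express the left anti-join the question asks for directly. A minimal sketch, using small made-up frames in place of the CSV files and assuming the columns are named `ip` and `cve` as described:

```python
import pandas as pd

# Hypothetical data standing in for the two CSV files; in the real
# script these would come from pd.read_csv(...).
nessus = pd.DataFrame({'ip': ['10.0.0.1', '10.0.0.2', '10.0.0.3'],
                       'cve': ['CVE-1', 'CVE-2', 'CVE-3']})
patrol = pd.DataFrame({'ip': ['10.0.0.2'],
                       'cve': ['CVE-2']})

# Left-merge on the shared columns; indicator=True adds a `_merge`
# column marking each row as 'left_only', 'right_only', or 'both'.
merged = nessus.merge(patrol, how='left', on=['ip', 'cve'], indicator=True)

# Keeping only 'left_only' rows is the left anti-join: rows of
# `nessus` that do not appear in `patrol`.
nessus_filtered = merged[merged['_merge'] == 'left_only'].drop(columns='_merge')
print(nessus_filtered)
```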

You can learn from the sample code given below:

    import pandas as pd

    data_a = pd.read_csv('./a.csv')
    data_b = pd.read_csv('./b.csv')
    print('Data A')
    print(data_a)
    print('\nData B')
    print(data_b)

    data_c = pd.concat([data_a, data_b]).drop_duplicates(keep='first')
    print('\nData C - Final dataset')
    print(data_c)

It reads two sample .csv files (a.csv and b.csv), both having the same structure (id, name columns) with a few duplicate values. We simply read these .csv files, drop the duplicates, and keep the first row of each duplicate pair.

Data A
   id   name
0   1   Jhon
1   2   Kane
2   3    Leo
3   4  Brack

Data B
   id   name
0   2   Kane
1   4  Brack
2   5  Peter
3   6    Tom

Data C - Final dataset
   id   name
0   1   Jhon
1   2   Kane
2   3    Leo
3   4  Brack
2   5  Peter
3   6    Tom

Hope this helps you solve your problem.
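Note that `keep='first'` still retains one copy of each row that appears in both files. If the goal is the left anti-join described in the question, where rows present in both files are removed entirely, `keep=False` drops every copy of a duplicated row. A sketch using the same sample data as above, built inline rather than read from a.csv and b.csv:

```python
import pandas as pd

# Same sample data as in the answer, as in-memory frames.
data_a = pd.DataFrame({'id': [1, 2, 3, 4],
                       'name': ['Jhon', 'Kane', 'Leo', 'Brack']})
data_b = pd.DataFrame({'id': [2, 4, 5, 6],
                       'name': ['Kane', 'Brack', 'Peter', 'Tom']})

# keep=False discards ALL copies of rows that occur more than once in
# the concatenated frame, leaving only rows unique to one file.
unique_rows = pd.concat([data_a, data_b]).drop_duplicates(keep=False)
print(unique_rows)
```

Here Kane and Brack, which occur in both frames, are removed entirely, leaving Jhon, Leo, Peter, and Tom.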
