
How to drop duplicates in csv by pandas library in Python?

I've been looking around for examples, but I can't get this to work the way I want.

I want to dedupe by "Order ID" and extract the duplicates to a separate CSV. The main thing is that I need to be able to change the column I dedupe on, in this case "Order ID".

Sample dataset:

 ID  Fruit   Order ID  Quantity  Price
 1   apple   1111      11        £2.00
 2   banana  2222      22        £3.00
 3   orange  3333      33        £5.00
 4   mango   4444      44        £7.00
 5   Kiwi    3333      55        £5.00

Output:

 ID  Fruit  Order ID  Quantity  Price
 5   Kiwi   3333      55        £5.00

I tried this:

import pandas as pd

df = pd.read_csv('C:/Users/shane/PycharmProjects/PythonTut/deduping/duplicate example.csv')

new_df = df[['ID','Fruit','Order ID','Quantity','Price']].drop_duplicates()

new_df.to_csv('C:/Users/shane/PycharmProjects/PythonTut/deduping/duplicate test.csv', index=False)

The problem I'm running into is that it doesn't drop any duplicates.

You can achieve this by creating a new dataframe using value_counts(), merging, and filtering.

# value_counts() returns a Series; to_frame() turns it into a DataFrame
df_counts = df['OrderID'].value_counts().to_frame()
# rename the column
df_counts.columns = ['order_counts']

# merge the original on column "OrderID" with the counts by their index
df_merged = pd.merge(df, df_counts, left_on='OrderID', right_index=True)

# the duplicates are simply the rows whose count is higher than 1
df_filtered = df_merged[df_merged['order_counts'] > 1]

# everything else that isn't a duplicate
df_not_duplicates = df_merged[df_merged['order_counts'] == 1]
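Put together, the steps above can be sketched end-to-end on the sample data from the question (note: this sketch assumes the column is named "Order ID" with a space, as in the question's sample, whereas the snippet above writes "OrderID"):

```python
import pandas as pd

# Sample data from the question; Order ID 3333 appears twice
df = pd.DataFrame({
    'ID': [1, 2, 3, 4, 5],
    'Fruit': ['apple', 'banana', 'orange', 'mango', 'Kiwi'],
    'Order ID': [1111, 2222, 3333, 4444, 3333],
    'Quantity': [11, 22, 33, 44, 55],
    'Price': ['£2.00', '£3.00', '£5.00', '£7.00', '£5.00'],
})

# Count how many times each Order ID occurs
df_counts = df['Order ID'].value_counts().to_frame()
df_counts.columns = ['order_counts']

# Merge the counts back onto the original rows by the count frame's index
df_merged = pd.merge(df, df_counts, left_on='Order ID', right_index=True)

# Rows whose Order ID occurs more than once
df_filtered = df_merged[df_merged['order_counts'] > 1]
# Rows whose Order ID is unique
df_not_duplicates = df_merged[df_merged['order_counts'] == 1]

print(sorted(df_filtered['ID'].tolist()))        # [3, 5]
print(sorted(df_not_duplicates['ID'].tolist()))  # [1, 2, 4]
```

Because the dedupe column is just a string here, swapping 'Order ID' for any other column name is enough to change what you dedupe on.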

Edit: drop_duplicates() keeps only unique values; if duplicate values are found, it drops all but one. The "keep" parameter lets you choose which one is kept, either "first" or "last".
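For reference, a minimal sketch of drop_duplicates() with the "subset" and "keep" parameters, plus duplicated() to pull only the duplicate rows into their own frame (column names taken from the question's sample data):

```python
import pandas as pd

df = pd.DataFrame({
    'ID': [1, 2, 3, 4, 5],
    'Fruit': ['apple', 'banana', 'orange', 'mango', 'Kiwi'],
    'Order ID': [1111, 2222, 3333, 4444, 3333],
})

# Keep one row per Order ID; keep='last' keeps the later occurrence
deduped = df.drop_duplicates(subset='Order ID', keep='last')

# duplicated() flags repeats; keep=False marks every row of a duplicated group
dupes_only = df[df.duplicated(subset='Order ID', keep=False)]

print(deduped['ID'].tolist())     # [1, 2, 4, 5]
print(dupes_only['ID'].tolist())  # [3, 5]
```

Either frame can then be written out with to_csv(), which matches the asker's goal of sending the duplicates to a separate file.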

Edit 2: From your comments, you want to export the results to csv. Keep in mind that the way I did it above splits things into 2 DataFrames:

a) everything with the duplicates removed (df_not_duplicates)

b) only the duplicated items, duplicates still included (df_filtered)

# Type 1: all OrderIDs that had duplicates, duplicates still included
df_filtered.to_csv("path_to_my_csv//filename.csv", sep=",", encoding="utf-8")

# Type 2: all OrderIDs that had duplicates, but only 1 line per OrderID
df_filtered.drop_duplicates(subset="OrderID", keep='last').to_csv("path_to_my_csv//filename.csv", sep=",", encoding="utf-8")

If you want to use the drop_duplicates method, the error is in your second line of code (you should use pd.DataFrame).

import pandas as pd

df = pd.read_csv('C:/Users/shane/PycharmProjects/PythonTut/deduping/duplicateexample.csv')

# Create dataframe with duplicates
raw_data = {'ID': [1, 2, 3, 4, 5],
            'Fruit': ['apple', 'Banana', 'Orange', 'Mango', 'Kiwi'],
            'Order ID': [1111, 2222, 3333, 4444, 5555],
            'Quantity': [11, 22, 33, 44, 55],
            'Price': [2, 3, 5, 7, 5]}

new_df = pd.DataFrame(raw_data, columns = ['ID','Fruit','Order ID','Quantity','Price']).drop_duplicates()

new_df.to_csv('C:/Users/shane/PycharmProjects/PythonTut/deduping/duplicate test.csv', index=False)

Hope that helps.
