
I can't figure out why I can't remove duplicates from a Pandas df

I am trying to update a Pandas DataFrame with data from an API and write it to a .csv file, and I need to be sure it does not contain duplicate rows.

I have been checking on here to see what the problem might be (for example forgetting to add inplace=True), but this doesn't seem to be the case.

So... I have pandas read the csv

import pandas as pd

df = pd.read_csv(file)

Then I download some more data from the API (I made sure it contained duplicate lines) and create df2. The csv was written by the same code, so I am sure that a duplicate line is exactly the same. Now I need to append one dataframe to the other and then drop the duplicates:

df = df.append(df2, ignore_index=True)
df.drop_duplicates(subset=None, keep='first', inplace=True)
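Note that DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0, so on a recent pandas the same step would be written with pd.concat. A minimal sketch, using the df and df2 from above:

# pd.concat replaces the removed DataFrame.append for stacking two frames
df = pd.concat([df, df2], ignore_index=True)
# drop exact duplicate rows, keeping the first occurrence
df = df.drop_duplicates(keep='first')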

then I tried

df = df.drop_duplicates()

I would expect not to see any duplicate rows with either approach, but I must be missing something, as they are still there and I can't figure out why. I did check whether someone else's question addressed this, but the problem there is normally a missing inplace=True... which is not my case.
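One thing worth checking before blaming drop_duplicates: rows that look identical when printed can still differ after a CSV round-trip (float precision, or a column coming back as str instead of int), in which case pandas rightly keeps both. A minimal diagnostic sketch, assuming the two suspicious rows sit at the hypothetical index labels 0 and 5:

# compare the two rows element-wise; False marks the columns that differ
print(df.loc[0].eq(df.loc[5]))

# pandas >= 1.1: show only the differing values side by side
print(df.loc[0].compare(df.loc[5]))

# a dtype mismatch also makes visually identical values compare unequal
print(df.dtypes)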

Is this what you need?

df.drop_duplicates(keep=False)
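For reference, keep=False is stricter than the default: it drops every row that has a duplicate anywhere, instead of keeping the first occurrence, so it only matches the goal if you want duplicated rows gone entirely. A tiny self-contained example:

import pandas as pd

df = pd.DataFrame({'a': [1, 1, 2]})
print(df.drop_duplicates(keep='first'))  # keeps one row with a=1, plus a=2
print(df.drop_duplicates(keep=False))    # only the row with a=2 survives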
