How to find duplicates in pandas?
I have a data frame of about 52000 rows with some duplicates. When I use

df.drop_duplicates()

I lose about 1000 rows, but I don't want to erase these rows — I want to know which ones are the duplicate rows.
You could use `duplicated()` for that:
df[df.duplicated()]
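To make the behavior concrete, here is a minimal sketch on a small made-up frame (not the asker's data). By default `duplicated()` marks every occurrence after the first; passing `keep=False` marks all members of each duplicate group, which is useful when you want to inspect every copy:

```python
import pandas as pd

# Small illustrative frame with one duplicated row (rows 1 and 2 are identical).
df = pd.DataFrame({"a": [1, 2, 2, 3], "b": ["x", "y", "y", "z"]})

# Default: only the second and later occurrences are flagged.
repeats = df[df.duplicated()]          # just row 2

# keep=False: every row that has a duplicate anywhere is flagged.
all_dupes = df[df.duplicated(keep=False)]  # rows 1 and 2
```

You can also restrict the check to certain columns with the `subset` parameter, e.g. `df.duplicated(subset=["a"])`.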