
How to drop rows with duplicate column values when the number of columns is not fixed?

I have a dataframe, and the number of columns in that dataframe can vary (2-50). For example, with the 2 columns below, I want to remove rows where site1 and site2 are the same.

import pandas as pd

df = pd.DataFrame([[507814, 501972], [529389, 529389], [508110, 508161]], columns = ['site1', 'site2'])

The full dataframe:

    site1   site2
0  507814  501972
1  529389  529389
2  508110  508161

I want to drop rows with identical column values. Expected output:

    site1   site2
0  507814  501972
2  508110  508161

df[df["site1"] != df["site2"]]

This can be done with the line above, but since I do not have a fixed number of columns and this piece runs inside a loop, I need the fastest way to do this.

I appreciate the help in advance.

Thanks.

If you have more columns, you can use set() + len():

# keep rows whose values form a set with more than one element (i.e. not all equal)
x = df[~df.apply(lambda x: len(set(x)), axis=1).eq(1)]
print(x)

Prints:

    site1   site2
0  507814  501972
2  508110  508161

Edit: To specify columns:

x = df[~df[["site1", "site2"]].apply(lambda x: len(set(x)), axis=1).eq(1)]
print(x)

Prints:

    site1   site2   site3
0  507814  501972  508284
2  508110  508161  508098

df used:

    site1   site2   site3
0  507814  501972  508284
1  529389  529389  508284
2  508110  508161  508098
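Since the question asks for the fastest option inside a loop, a fully vectorized variant (my sketch, not part of the answer above) avoids the per-row Python lambda by comparing every column against the first one:

import pandas as pd

df = pd.DataFrame([[507814, 501972], [529389, 529389], [508110, 508161]],
                  columns=['site1', 'site2'])

# Keep rows where at least one column differs from the first column;
# a row whose values are all identical differs from column 0 nowhere.
mask = df.ne(df.iloc[:, 0], axis=0).any(axis=1)
print(df[mask])

This works for any number of columns and stays entirely in vectorized pandas operations.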

You can do it like this:

df = df[df.nunique(axis=1) > 1]
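For example, on the 2-column frame from the question, nunique(axis=1) counts the distinct values per row, so > 1 keeps every row with at least two different values; this works unchanged for any number of columns:

import pandas as pd

df = pd.DataFrame([[507814, 501972], [529389, 529389], [508110, 508161]],
                  columns=['site1', 'site2'])

# per-row distinct counts are [2, 1, 2]; row 1 (all values equal) is dropped
print(df[df.nunique(axis=1) > 1])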

Here is another way. This should work if all your site values are numbers.

df.loc[df.diff(axis=1).sum(axis=1).ne(0)]
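One caveat worth noting (my observation, not part of the answer): diff(axis=1).sum(axis=1) telescopes to the last column minus the first, so with more than two columns opposite differences can cancel and a row can be dropped even though its values differ:

import pandas as pd

# Row [1, 2, 1] is not all-equal, yet its consecutive diffs [NaN, 1, -1]
# sum to 0, so ne(0) is False and the row is wrongly dropped (empty output).
df3 = pd.DataFrame([[1, 2, 1]], columns=['site1', 'site2', 'site3'])
print(df3.loc[df3.diff(axis=1).sum(axis=1).ne(0)])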

Using your example, this filters out the rows where site1 == site2:

# first option
df[~df.apply(lambda x: x["site1"] == x["site2"], axis=1)]

# second option
df.query("site1 != site2")

Both options give you:

    site1   site2
0   507814  501972
2   508110  508161
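Since speed matters here, a rough way to compare the approaches yourself (a sketch; the frame size is hypothetical and timings depend on your data) is timeit:

import timeit
import pandas as pd
import numpy as np

# a larger random frame for a rough comparison (hypothetical size)
df = pd.DataFrame(np.random.randint(0, 10, size=(10_000, 5)),
                  columns=[f'site{i}' for i in range(1, 6)])

for label, stmt in [
    ("apply/set", 'df[~df.apply(lambda r: len(set(r)), axis=1).eq(1)]'),
    ("nunique",   'df[df.nunique(axis=1) > 1]'),
    ("ne/any",    'df[df.ne(df.iloc[:, 0], axis=0).any(axis=1)]'),
]:
    t = timeit.timeit(stmt, number=5, globals={'df': df})
    print(f'{label}: {t:.3f}s for 5 runs')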
