
SPARK: dropDuplicates within each partition only

I want to dropDuplicates within each partition, not across the full DataFrame.

Is that possible with PySpark? Thanks.

import pyspark.sql.functions as f

# Deduplicate, treating the partition id as part of each row's identity.
withNoDuplicates = df.withColumn("partitionID", f.spark_partition_id()).dropDuplicates().drop("partitionID")

Basically, you add a column containing the partition id using spark_partition_id and then call dropDuplicates. Because the partition id is part of the deduplication key, identical rows sitting in different partitions are no longer treated as duplicates, so each partition is deduplicated separately. (The helper column is dropped afterwards so the result has the original schema.)
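
If you would rather avoid the shuffle that dropDuplicates triggers, another option is to deduplicate each partition directly with mapPartitions on the underlying RDD. Here is a minimal sketch, assuming exact row matches count as duplicates and all column values are hashable; the sample df and the dedup_partition helper are illustrative, not from the original answer:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Illustrative data; duplicates may land in the same or in different partitions.
df = spark.createDataFrame([(1, "a"), (1, "a"), (2, "b")], ["id", "val"])

def dedup_partition(rows):
    # Keep only the first occurrence of each row within this partition.
    seen = set()
    for row in rows:
        key = tuple(row)
        if key not in seen:
            seen.add(key)
            yield row

deduped = spark.createDataFrame(df.rdd.mapPartitions(dedup_partition), df.schema)

Unlike the spark_partition_id approach, this runs entirely inside each partition, so no shuffle occurs; the trade-off is a detour through the RDD API and keeping each partition's seen set in memory.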
