
how to replace distinct() with reducebykey

I have a scenario where the code below takes more than 10 hours overall for >2 billion records. Even when I tried a cluster of 35 i3 instances, performance was still poor. I am looking for a way to replace distinct() with reduceByKey(), and for any suggestions to improve performance...

    val df = spark.read.parquet(out)

    val df1 = df.select(
      $"ID", $"col2", $"suffix",
      $"date", $"year", $"codes")

    val df2 = df1.repartition(
      List(col("ID"), col("col2"), col("suffix"), col("date"),
        col("year"), col("codes")): _*
    ).distinct()

    // pair each code with its array index; the original lambda
    // `(c,s) -> (d,s)` referenced an undefined `d`
    val df3 = df2.withColumn("codes",
      expr("transform(codes, (c, s) -> named_struct('c', c, 's', s))"))

    df3.createOrReplaceTempView("df3")

    val df4 = spark.sql(
      """SELECT
           ID, col2, suffix,
           d.s AS seq,
           d.c AS code,
           year, date
         FROM df3
         LATERAL VIEW explode(codes) exploded_table AS d""")

    df4.
      repartition(600, List(col("year"), col("date")): _*).
      write.
      mode("overwrite").
      partitionBy("year", "date").
      save(OutDir)
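
For reference, the transform + LATERAL VIEW pair above can be expressed more directly with posexplode(), which emits each array element together with its index in one step. A minimal sketch, assuming df2 from the snippet above (df4Alt is a hypothetical name):

    import org.apache.spark.sql.functions.posexplode

    // posexplode yields (pos, col); alias them to seq and code,
    // matching the columns produced by the SQL above
    val df4Alt = df2.select(
      $"ID", $"col2", $"suffix", $"year", $"date",
      posexplode($"codes").as(Seq("seq", "code")))

This avoids materializing the intermediate array of structs, though the dominant cost in this pipeline is still the distinct() shuffle.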

I think distinct() is implemented with reduceByKey (reduce), but if you want to implement it yourself, you could do something like this:

    val array = List((1, 2), (1, 3), (1, 5), (1, 2), (2, 2), (2, 2), (3, 2), (3, 2), (4, 1), (1, 3))
    val pairRDD = session.sparkContext.parallelize(array)

    // pair each element with a dummy null, collapse duplicates by key,
    // then drop the dummy to recover the distinct elements
    val distinctResult = pairRDD.map(x => (x, null)).reduceByKey((x, _) => x).map(_._1)
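
To apply the same pattern to the DataFrame in the question, one option is to drop down to the RDD API, treat each whole Row as the key, and rebuild the DataFrame afterwards. A minimal sketch, assuming df1 and spark from the question (dedupedRdd and df2Alt are hypothetical names):

    // each distinct Row survives exactly once; the dummy null value
    // is dropped after the reduce
    val dedupedRdd = df1.rdd
      .map(row => (row, null))
      .reduceByKey((r, _) => r)
      .map(_._1)

    val df2Alt = spark.createDataFrame(dedupedRdd, df1.schema)

Note that this is unlikely to beat DataFrame distinct(): both trigger one shuffle over the same key, but the RDD round trip gives up whole-stage codegen and adds Row serialization, so in practice the DataFrame version usually wins.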
