
Spark DataFrame - How to partition the data based on condition

I have an employee data set. I need to partition it by employee salary, according to some conditions. I created a DataFrame, converted it to a Dataset of a custom Emp class, and wrote a custom Partitioner for salary:

import org.apache.spark.Partitioner

class SalaryPartition(override val numPartitions: Int) extends Partitioner {

  // getPartition must return an index in the range 0 until numPartitions,
  // so the three salary bands map to 0, 1 and 2 (not 1, 2 and 3).
  // The match is on the salary field, assuming Emp carries one; the original
  // compared EMPLOYEE_ID against salary thresholds.
  override def getPartition(key: Any): Int = {
    import com.csc.emp.spark.tutorial.PartitonObj._
    key.asInstanceOf[Emp].salary match {
      case salary if salary < 10000                    => 0
      case salary if salary >= 10000 && salary < 20000 => 1 // >= 10000 closes the gap at exactly 10000
      case _                                           => 2
    }
  }

}

Question: how can I invoke/call my custom partitioner? I couldn't find partitionBy on DataFrame. Is there an alternative way?
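For the record, Dataset/DataFrame has no partitionBy that accepts a Partitioner; that API lives on pair RDDs, so one route is to drop to the RDD level, e.g. empDS.rdd.map(e => (e, ())).partitionBy(new SalaryPartition(3)). The mapping itself can be sketched Spark-free (the 0-based indices are required by the Partitioner contract; salaryPartition is an illustrative name, not from the original post):

```scala
// Spark-free sketch of the mapping a custom Partitioner must implement:
// getPartition has to return an index in 0 until numPartitions, so the
// three salary bands map to 0, 1 and 2.
def salaryPartition(salary: Int): Int = salary match {
  case s if s < 10000               => 0
  case s if s >= 10000 && s < 20000 => 1 // >= 10000, so exactly 10000 is not skipped
  case _                            => 2
}
```

To actually apply it, the Dataset has to become a key-value RDD first, because RDD.partitionBy is only defined on pair RDDs.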

Here is the code for my comment:

import org.apache.spark.sql.functions.udf
import spark.implicits._ // assumes an active SparkSession named `spark`

case class Emp(id: Int, salary: Int)

val empDS = List(Emp(5, 1000), Emp(4, 15000), Emp(3, 30000), Emp(2, 2000)).toDS()
println(s"Original partitions number: ${empDS.rdd.partitions.size}")
println("-- Original partition: data --")
empDS.rdd.mapPartitionsWithIndex((index, it) => {
  it.foreach(r => println(s"Partition $index: $r")); it
}).count()

val getSalaryGrade = (salary: Int) => salary match {
  case salary if salary < 10000                    => 1
  case salary if salary >= 10000 && salary < 20000 => 2 // >= 10000 closes the gap at exactly 10000
  case _                                           => 3
}
val getSalaryGradeUDF = udf(getSalaryGrade)
val salaryGraded = empDS.withColumn("salaryGrade", getSalaryGradeUDF($"salary"))

val repartitioned = salaryGraded.repartition($"salaryGrade")
println
println(s"Partitions number after: ${repartitioned.rdd.partitions.size}")
println("-- Repartitioned partition: data --")

repartitioned.as[Emp].rdd.mapPartitionsWithIndex((index, it) => {
  it.foreach(r => println(s"Partition $index: $r")); it
}).count()

Output is:

Original partitions number: 2
-- Original partition: data --
Partition 1: Emp(3,30000)
Partition 0: Emp(5,1000)
Partition 1: Emp(2,2000)
Partition 0: Emp(4,15000)

Partitions number after: 5
-- Repartitioned partition: data --
Partition 1: Emp(3,30000)
Partition 3: Emp(5,1000)
Partition 3: Emp(2,2000)
Partition 4: Emp(4,15000)
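The partition indices above come from hashing the "salaryGrade" column: repartition($"salaryGrade") shuffles each row into a hash(grade) modulo numPartitions bucket, so distinct grades can collide in one partition and other partitions stay empty. A rough Spark-free sketch of the placement rule (Spark actually uses a Murmur3-based hash on the column value, so the concrete bucket numbers differ; bucketFor is an illustrative name):

```scala
// Non-negative modulo placement, mirroring how hash repartitioning assigns
// a key to one of numPartitions buckets based on its hash code.
def bucketFor(grade: Int, numPartitions: Int): Int = {
  val h = grade.hashCode % numPartitions
  if (h < 0) h + numPartitions else h
}
```

This is why the number of resulting partitions is not tied to the number of distinct grades: it is fixed by the shuffle partition count, and grades are merely scattered over it.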

Note: presumably, several distinct "salaryGrade" values can land in the same partition, and some partitions may stay empty.

Advice: "groupBy" or similar looks like a more reliable solution.

To stay with Dataset entities, "groupByKey" can be used:

empDS.groupByKey(x => getSalaryGrade(x.salary)).mapGroups((key, it) => {
  it.foreach(r => println(s"Group $key: $r")); key
}).count()

Output:

Group 1: Emp(5,1000)
Group 3: Emp(3,30000)
Group 1: Emp(2,2000)
Group 2: Emp(4,15000)
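The same grouping can be checked without Spark using the standard collection groupBy, which is what makes the grade-based approach easy to reason about. A minimal sketch over the same sample data (Emp and the grade function are re-declared locally for self-containment):

```scala
// Spark-free analogue of groupByKey over the same sample data.
case class Emp(id: Int, salary: Int)

def grade(salary: Int): Int = salary match {
  case s if s < 10000               => 1
  case s if s >= 10000 && s < 20000 => 2
  case _                            => 3
}

val emps   = List(Emp(5, 1000), Emp(4, 15000), Emp(3, 30000), Emp(2, 2000))
val groups = emps.groupBy(e => grade(e.salary)) // Map(grade -> employees)
```

Unlike repartition, grouping by the computed grade guarantees exactly one group per distinct grade, with no hash collisions between grades.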
