
Make RDD from List in Scala & Spark

Original data:

ID, NAME, SEQ, NUMBER
A, John, 1, 3
A, Bob, 2, 5
A, Sam, 3, 1
B, Kim, 1, 4
B, John, 2, 3
B, Ria, 3, 5

To make the per-ID group lists, I did the following:

// pair each row with its ID, then merge the per-ID lists
val MapRDD = originDF.map { x => (x.getAs[String](colMap.ID), List(x)) }
val ListRDD = MapRDD.reduceByKey { (a: List[Row], b: List[Row]) => List(a, b).flatten }
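
(As an aside, the same per-ID grouping can be written in one step with groupByKey. A minimal sketch, assuming originDF can be turned into an RDD[Row] via originDF.rdd and that the ID column is literally named "ID"; ListRDD2 is a hypothetical name:)

val ListRDD2 = originDF.rdd
  .map(x => (x.getAs[String]("ID"), x))  // key each row by its ID
  .groupByKey()                          // RDD[(String, Iterable[Row])]
  .mapValues(_.toList)                   // RDD[(String, List[Row])]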

My goal is to make the RDD below (the purpose is to find, for each row within an ID group, the NAME of the previous row (SEQ - 1) and the NUMBER difference):

ID, NAME, SEQ, NUMBER, PRE_NAME, DIFF
A, John, 1, 3, NULL, NULL
A, Bob, 2, 5, John, 2
A, Sam, 3, 1, Bob, -4
B, Kim, 1, 4, NULL, NULL
B, John, 2, 3, Kim, -1
B, Ria, 3, 5, John, 2

Currently, ListRDD looks like this:

A, ([A,John,1,3], [A,Bob,2,5], ..)
B, ([B,Kim,1,4], [B,John,2,3], ..)

This is the code I tried in order to build my goal RDD from ListRDD (it does not work as I want):

  def myFunction(ListRDD: RDD[(String, List[Row])]) = {
    var rows: List[Row] = Nil
    ListRDD.foreach { row =>
      // runs on the executors, so this driver-side var is never actually updated
      rows = rows ::: make(row._2)
    }
    // rows stays empty here, and it's still not an RDD
  }

  def make(eachList: List[Row]): List[Row] = {
    eachList.map { x => ??? } // ... make PRE_NAME and DIFF in a new List
  }

My final goal is to save this RDD as CSV (RDD.saveAsFile...). How can I make this RDD (not a List) from this data?

Window functions look like a good fit here:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.lag
// Outside spark-shell you will also need `import spark.implicits._`
// for .toDF and the $"..." column syntax.

val df = sc.parallelize(Seq(
    ("A", "John", 1, 3),
    ("A", "Bob", 2, 5),
    ("A", "Sam", 3, 1),
    ("B", "Kim", 1, 4),
    ("B", "John", 2, 3),
    ("B", "Ria", 3, 5))).toDF("ID", "NAME", "SEQ", "NUMBER")

val w = Window.partitionBy($"ID").orderBy($"SEQ")

df.select($"*",
  lag($"NAME", 1).over(w).alias("PREV_NAME"),
  ($"NUMBER" - lag($"NUMBER", 1).over(w)).alias("DIFF"))
