
Spark: best way to groupByKey, orderBy and filter

I have 50 GB of data with the schema [ID, timestamp, countryId], and using Spark 2.2.1 I want to get every "change" per person across all their events, ordered by timestamp. What I mean is, if I have these events:

1,20180101,2
1,20180102,3
1,20180105,3
2,20180105,3
1,20180108,4
1,20180109,3
2,20180108,3
2,20180109,6

I want to get this:

1,20180101,2
1,20180102,3
1,20180108,4
1,20180109,3
2,20180105,3
2,20180109,6

To do this I wrote the following code:

val eventsOrdened = eventsDataFrame.orderBy("ID", "timestamp")

val grouped = eventsOrdened
  .rdd.map(x => (x.getString(0), x))
  .groupByKey(300)
  .mapValues(y => cleanEvents(y))
  .flatMap(_._2)

where "cleanEvents" is:

import org.apache.spark.sql.Row
import scala.collection.mutable.ListBuffer

def cleanEvents(ordenedEvents: Iterable[Row]): Iterable[Row] = {

  val ordered = ordenedEvents.toList

  val cleanedList: ListBuffer[Row] = ListBuffer.empty[Row]

  ordered.foreach { x =>

    // Look ahead to the next event; the last event is compared with itself
    val next = if (ordered.indexOf(x) != ordered.length - 1) ordered(ordered.indexOf(x) + 1) else x
    val country = x.get(2)
    val nextCountry = next.get(2)
    val isFirst = cleanedList.isEmpty
    val isLast = ordered.indexOf(x) == ordered.length - 1

    if (isFirst) {
      // Always keep the first event of the person
      cleanedList.append(x)
    } else {
      if (cleanedList.last.get(2) != country && country != nextCountry) {
        // Keep the event when its country differs from the last kept one and from the next event's country
        cleanedList.append(x)
      } else {
        // Keep the very last event if its country differs from the last kept one
        if (isLast && cleanedList.last.get(2) != country) cleanedList.append(x)
      }
    }
  }
  cleanedList
}

It works, but it is far too slow; any optimization is welcome!

Thanks!

可以使用窗口函數“滯后”:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{lag, isnull}
import spark.implicits._

case class Details(id: Int, date: Int, cc: Int)

val list = List[Details](
  Details(1, 20180101, 2),
  Details(1, 20180102, 3),
  Details(1, 20180105, 3),
  Details(2, 20180105, 3),
  Details(1, 20180108, 4),
  Details(1, 20180109, 3),
  Details(2, 20180108, 3),
  Details(2, 20180109, 6))
val ds = list.toDS()

// Keep a row only when its "cc" differs from the previous row of the same id
val window = Window.partitionBy("id").orderBy("date")
val result = ds
  .withColumn("lag", lag($"cc", 1).over(window))
  .where(isnull($"lag") || $"lag" =!= $"cc")
  .orderBy("id", "date")
result.show(false)

結果是(滯后列可以刪除):

+---+--------+---+----+
|id |date    |cc |lag |
+---+--------+---+----+
|1  |20180101|2  |null|
|1  |20180102|3  |2   |
|1  |20180108|4  |3   |
|1  |20180109|3  |4   |
|2  |20180105|3  |null|
|2  |20180109|6  |3   |
+---+--------+---+----+
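
As a small follow-up sketch, assuming the `result` Dataset from the snippet above (the `changes` name is only illustrative), the helper column can be removed with a plain drop:

  val changes = result.drop("lag")
  changes.show(false)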

You may want to try the following:

  1. Secondary sort. It is low level: you partition and sort the data yourself with a custom partitioner. More info: http://codingjunkie.net/spark-secondary-sort/ (see the sketch after this list).

  2. Use combineByKey:

     case class Details(id: Int, date: Int, cc: Int)

     val sc = new SparkContext("local[*]", "App")

     val list = List[Details](
       Details(1, 20180101, 2),
       Details(1, 20180102, 3),
       Details(1, 20180105, 3),
       Details(2, 20180105, 3),
       Details(1, 20180108, 4),
       Details(1, 20180109, 3),
       Details(2, 20180108, 3),
       Details(2, 20180109, 6))

     val rdd = sc.parallelize(list)

     val createCombiner = (v: (Int, Int)) => List[(Int, Int)](v)
     val combiner = (c: List[(Int, Int)], v: (Int, Int)) => (c :+ v).sortBy(_._1)
     val mergeCombiner = (c1: List[(Int, Int)], c2: List[(Int, Int)]) => (c1 ++ c2).sortBy(_._1)

     rdd
       .map(det => (det.id, (det.date, det.cc)))
       .combineByKey(createCombiner, combiner, mergeCombiner)
       .collect()
       .foreach(println)

The output will be something like this:

(1,List((20180101,2), (20180102,3), (20180105,3), (20180108,4), (20180109,3)))
(2,List((20180105,3), (20180108,3), (20180109,6)))
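
For point 1, here is a minimal sketch of the secondary-sort idea, not code from the original answer (the IdPartitioner and changesPerId names are hypothetical): partition by id only, sort each partition by the composite key (id, date) via repartitionAndSortWithinPartitions, then keep a row only when the country changes within an id.

  import org.apache.spark.Partitioner
  import org.apache.spark.rdd.RDD

  // Same Details case class as in the examples above
  case class Details(id: Int, date: Int, cc: Int)

  // Route every event of one id to the same partition; the key is (id, date)
  class IdPartitioner(override val numPartitions: Int) extends Partitioner {
    override def getPartition(key: Any): Int = key match {
      case (id: Int, _) => ((id % numPartitions) + numPartitions) % numPartitions
    }
  }

  def changesPerId(rdd: RDD[Details], partitions: Int): RDD[Details] =
    rdd
      .map(d => ((d.id, d.date), d))                                      // composite key
      .repartitionAndSortWithinPartitions(new IdPartitioner(partitions))  // secondary sort
      .values
      .mapPartitions { it =>
        // Rows of one id are now contiguous and date-ordered; keep a row only
        // when it starts a new id or its country differs from the previous row
        var prev: Option[Details] = None
        it.filter { d =>
          val keep = prev.forall(p => p.id != d.id || p.cc != d.cc)
          prev = Some(d)
          keep
        }
      }

Applied to the rdd built in the combineByKey example, e.g. changesPerId(rdd, 4).collect().foreach(println), this should yield the six "change" rows from the question without collecting a whole key's events into one list.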

