
Scala - Spark - Iterate over joined pair RDD

I am trying to join 2 pair RDDs, but I am not sure how to iterate over the result.

val input1 = sc.textFile(inputFile1)
val input2 = sc.textFile(inputFile2)

val pairs = input1.map(x => (x.split("\\|")(18),x))
val groupPairs = pairs.groupByKey()

val staPairs = input2.map(y => (y.split("\\|")(0),y))
val stagroupPairs = staPairs.groupByKey()

val finalJoined = groupPairs.leftOuterJoin(stagroupPairs)

The type of finalJoined is:

org.apache.spark.rdd.RDD[(String, (Iterable[String], Option[Iterable[String]]))]

When I execute finalJoined.collect().foreach(println), I see the following output:

(key1,(CompactBuffer(val1a, val1b),Some(CompactBuffer(val1))))
(key2,(CompactBuffer(val2a, val2b),Some(CompactBuffer(val2))))

I would like the output to be:

For key1:

val1a+"|"+val1

val1b+"|"+val1

For key2:

val2a+"|"+val2
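One way to get that output from the leftOuterJoin result is to flatten each (Iterable, Option[Iterable]) pair. Below is a minimal local sketch using plain Scala collections (no Spark) with the same element shape as finalJoined; the sample data and the name `joined` are illustrative, but the same flatMap would apply to the RDD directly.

```scala
// Same shape as finalJoined: (key, (left values, optional right values)).
val joined: Seq[(String, (Iterable[String], Option[Iterable[String]]))] = Seq(
  ("key1", (Seq("val1a", "val1b"), Some(Seq("val1")))),
  ("key2", (Seq("val2a", "val2b"), Some(Seq("val2"))))
)

// For every key, pair each left value with each right value (if any),
// joining them with "|". Keys with no right side produce nothing.
val lines = joined.flatMap { case (_, (lefts, rightOpt)) =>
  for {
    l <- lefts
    r <- rightOpt.getOrElse(Iterable.empty)
  } yield l + "|" + r
}

lines.foreach(println)
// prints:
// val1a|val1
// val1b|val1
// val2a|val2
// val2b|val2
```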

Avoid calling groupByKey on both RDDs; perform the join directly on pairs and staPairs instead. You will get the desired result.

For example:

val rdd1 = sc.parallelize(Array("key1,val1a", "key1,val1b", "key2,val2a", "key2,val2b").toSeq)
val rdd2 = sc.parallelize(Array("key1,val1", "key2,val2").toSeq)
val pairs = rdd1.map(_.split(",")).map(x => (x(0), x(1)))
val starPairs = rdd2.map(_.split(",")).map(x => (x(0), x(1)))
val res = pairs.join(starPairs)
res.foreach(println)

(key1,(val1a,val1))
(key1,(val1b,val1))
(key2,(val2a,val2))
(key2,(val2b,val2))
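To see why this works without groupByKey, here is a local sketch (plain Scala collections, no Spark) of the same inner-join semantics: each matching left/right combination yields one flat pair, so no CompactBuffer unwrapping is needed and the final pipe-joined string is a simple map. The names here are illustrative.

```scala
// Build (key, value) pairs exactly as in the Spark example above.
val pairs = Seq("key1,val1a", "key1,val1b", "key2,val2a", "key2,val2b")
  .map(_.split(",")).map(x => (x(0), x(1)))
val starPairs = Seq("key1,val1", "key2,val2")
  .map(_.split(",")).map(x => (x(0), x(1)))

// Inner join on key: one output element per matching combination.
val starByKey = starPairs.groupBy(_._1)
val res = for {
  (k, v) <- pairs
  (_, w) <- starByKey.getOrElse(k, Seq.empty)
} yield (k, (v, w))

// Final formatting step: the answer to the original question.
val formatted = res.map { case (_, (v, w)) => v + "|" + w }
formatted.foreach(println)
// prints:
// val1a|val1
// val1b|val1
// val2a|val2
// val2b|val2
```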

