
How to Merge/Join Multiple DataFrames in Spark Scala: Efficient Full Outer Join

How can I efficiently merge/join multiple Spark DataFrames (Scala)? I want to join on a column common to all of the tables, `Date` below, and end up with a (somewhat) sparse result:

Data Set A:
Date    Col A1   Col A2
-----------------------
1/1/16  A11      A21
1/2/16  A12      A22
1/3/16  A13      A23
1/4/16  A14      A24
1/5/16  A15      A25

Data Set B:
Date    Col B1   Col B2
-----------------------
1/1/16  B11      B21
1/3/16  B13      B23
1/5/16  B15      B25

Data Set C:
Date    Col C1   Col C2
-----------------------
1/2/16  C12      C22
1/3/16  C13      C23
1/4/16  C14      C24
1/5/16  C15      C25

Expected Result Set:
Date    Col A1   Col A2  Col B1  Col B2  Col C1  Col C2
---------------------------------------------------------
1/1/16  A11      A21     B11     B21
1/2/16  A12      A22                     C12     C22
1/3/16  A13      A23     B13     B23     C13     C23
1/4/16  A14      A24                     C14     C24
1/5/16  A15      A25     B15     B25     C15     C25

This feels like a full outer join across multiple tables, but I'm not sure. Short of calling a join method on the DataFrames, is there some simpler/more efficient way to get to this sparse array?
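For what it's worth, DataFrames do support joins directly, and chaining full outer joins on the shared key column gives exactly the sparse result above. A minimal sketch (assuming a local `SparkSession`; the case class and object names are illustrative):

```scala
import org.apache.spark.sql.SparkSession

case class A(date: String, a1: String, a2: String)
case class B(date: String, b1: String, b2: String)
case class C(date: String, c1: String, c2: String)

val spark = SparkSession.builder().master("local[*]").appName("merge").getOrCreate()
import spark.implicits._

val dfa = Seq(A("1/1/16", "A11", "A21"), A("1/2/16", "A12", "A22"),
  A("1/3/16", "A13", "A23"), A("1/4/16", "A14", "A24"),
  A("1/5/16", "A15", "A25")).toDF()
val dfb = Seq(B("1/1/16", "B11", "B21"), B("1/3/16", "B13", "B23"),
  B("1/5/16", "B15", "B25")).toDF()
val dfc = Seq(C("1/2/16", "C12", "C22"), C("1/3/16", "C13", "C23"),
  C("1/4/16", "C14", "C24"), C("1/5/16", "C15", "C25")).toDF()

// Joining on Seq("date") coalesces the key into a single `date` column,
// so chained full outer joins line all three tables up on one key.
val merged = dfa
  .join(dfb, Seq("date"), "full_outer")
  .join(dfc, Seq("date"), "full_outer")
  .sort("date")

merged.show()
```

Dates absent from a table show up as nulls in that table's columns, matching the expected result set.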

This is an old post, so I'm not sure whether the OP is still around. Anyway, a simple way to get the desired result is via cogroup(): turn each RDD into a [K,V] RDD keyed by date, then use cogroup. Here is an example:

import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// Case classes must be defined at top level (not inside the method),
// otherwise toDF() cannot derive their schemas.
case class A(date: String, a1: String, a2: String)
case class B(date: String, b1: String, b2: String)
case class C(date: String, c1: String, c2: String)

def mergeFrames(sc: SparkContext, sqlContext: SQLContext): Unit = {

import sqlContext.implicits._

// Create three dataframes. All string types assumed.
val dfa = sc.parallelize(Seq(A("1/1/16", "A11", "A21"),
  A("1/2/16", "A12", "A22"),
  A("1/3/16", "A13", "A23"),
  A("1/4/16", "A14", "A24"),
  A("1/5/16", "A15", "A25"))).toDF()

val dfb = sc.parallelize(Seq(
  B("1/1/16", "B11", "B21"),
  B("1/3/16", "B13", "B23"),
  B("1/5/16", "B15", "B25"))).toDF()

val dfc = sc.parallelize(Seq(
  C("1/2/16", "C12", "C22"),
  C("1/3/16", "C13", "C23"),
  C("1/4/16", "C14", "C24"),
  C("1/5/16", "C15", "C25"))).toDF()

val rdda = dfa.rdd.map(row => row(0) -> row.toSeq.drop(1))
val rddb = dfb.rdd.map(row => row(0) -> row.toSeq.drop(1))
val rddc = dfc.rdd.map(row => row(0) -> row.toSeq.drop(1))

val schema = StructType("date a1 a2 b1 b2 c1 c2".split(" ").map(fieldName => StructField(fieldName, StringType)))

// Form cogroups. `date` is assumed to be a key so there's at most one row for each date in an rdd/df
val cg: RDD[Row] = rdda.cogroup(rddb, rddc).map { case (k, (v1, v2, v3)) =>
  val cols = Seq(k) ++
    (if (v1.nonEmpty) v1.head else Seq(null, null)) ++
    (if (v2.nonEmpty) v2.head else Seq(null, null)) ++
    (if (v3.nonEmpty) v3.head else Seq(null, null))
  Row.fromSeq(cols)
}

// Turn RDD back to DataFrame
val cgdf = sqlContext.createDataFrame(cg, schema).sort("date")

cgdf.show()
}
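The merge step inside the cogroup can be mimicked in plain Scala to see what it computes per key, with no Spark required. A toy sketch (the maps and values are illustrative, standing in for the keyed RDDs above):

```scala
// Toy re-implementation of the per-key merge: for each date, take the
// row from each side if present, else a row of nulls, exactly like the
// `if (v.nonEmpty) v.head else Seq(null, null)` branches above.
val a = Map("1/1/16" -> Seq("A11", "A21"), "1/2/16" -> Seq("A12", "A22"))
val b = Map("1/1/16" -> Seq("B11", "B21"))
val c = Map("1/2/16" -> Seq("C12", "C22"))

// cogroup yields the union of keys across all inputs.
val allDates = (a.keySet ++ b.keySet ++ c.keySet).toSeq.sorted

val rows = allDates.map { d =>
  Seq(d) ++
    a.getOrElse(d, Seq(null, null)) ++
    b.getOrElse(d, Seq(null, null)) ++
    c.getOrElse(d, Seq(null, null))
}

rows.foreach(println)
```

Each element of `rows` corresponds to one `Row.fromSeq(cols)` in the cogroup version: a date followed by two columns per input, padded with nulls where the date is missing.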

