
What's the difference between join and cogroup in Apache Spark

What's the difference between join and cogroup in Apache Spark? What's the use case for each method?

Let me help you clarify them. Both are commonly used and important!

def join[W](other: RDD[(K, W)]): RDD[(K, (V, W))]

This is the signature of join; please look at it carefully. For example,

val rdd1 = sc.makeRDD(Array(("A","1"),("B","2"),("C","3")),2)
val rdd2 = sc.makeRDD(Array(("A","a"),("C","c"),("D","d")),2)
 
scala> rdd1.join(rdd2).collect
res0: Array[(String, (String, String))] = Array((A,(1,a)), (C,(3,c)))

Every key that appears in the final result is present in both rdd1 and rdd2. This is similar to the relational database operation INNER JOIN.
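The same analogy extends to the other SQL-style joins: PairRDDFunctions also provides leftOuterJoin, rightOuterJoin, and fullOuterJoin, which wrap the possibly-missing side in an Option. A sketch with the same example RDDs (the results shown are illustrative; the ordering of a collect is not guaranteed):

```scala
val rdd1 = sc.makeRDD(Array(("A","1"),("B","2"),("C","3")),2)
val rdd2 = sc.makeRDD(Array(("A","a"),("C","c"),("D","d")),2)

// Keys missing on the right are kept; the right side becomes None.
rdd1.leftOuterJoin(rdd2).collect
// e.g. Array((A,(1,Some(a))), (B,(2,None)), (C,(3,Some(c))))

// Keys from either side are kept; each side becomes an Option.
rdd1.fullOuterJoin(rdd2).collect
// e.g. Array((A,(Some(1),Some(a))), (B,(Some(2),None)),
//            (C,(Some(3),Some(c))), (D,(None,Some(d))))
```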

But cogroup is different:

def cogroup[W](other: RDD[(K, W)]): RDD[(K, (Iterable[V], Iterable[W]))]

Any key that appears in at least one of the two RDDs will appear in the final result. Let me clarify:

val rdd1 = sc.makeRDD(Array(("A","1"),("B","2"),("C","3")),2)
val rdd2 = sc.makeRDD(Array(("A","a"),("C","c"),("D","d")),2)

scala> var rdd3 = rdd1.cogroup(rdd2).collect
res0: Array[(String, (Iterable[String], Iterable[String]))] = Array(
(B,(CompactBuffer(2),CompactBuffer())), 
(D,(CompactBuffer(),CompactBuffer(d))), 
(A,(CompactBuffer(1),CompactBuffer(a))), 
(C,(CompactBuffer(3),CompactBuffer(c)))
)

This is very similar to the relational database operation FULL OUTER JOIN, but instead of flattening the result into one row per matched record, it gives you the iterable interface per key; the subsequent processing is up to you, whichever way is convenient!
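In fact, join can be expressed on top of cogroup: cogroup the two RDDs, then flatten each key's pair of iterables into their cross product. Keys where either side is empty yield no pairs, which is exactly the INNER JOIN semantics shown above. A simplified sketch of that relationship (not the exact library source):

```scala
// Inner join expressed via cogroup: for each key, emit every (v, w)
// combination of the two iterables; if either iterable is empty,
// the 'for' yields nothing and the key is dropped.
val joined = rdd1.cogroup(rdd2).flatMapValues {
  case (vs, ws) => for (v <- vs; w <- ws) yield (v, w)
}
joined.collect
// e.g. Array((A,(1,a)), (C,(3,c)))  -- same result as rdd1.join(rdd2)
```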

Good luck!

The Spark docs are at: http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.rdd.PairRDDFunctions

