
Which function in Spark is used to combine two RDDs by keys

Let us say I have the following two RDDs, with the following key-pair values.

rdd1 = [ (key1, [value1, value2]), (key2, [value3, value4]) ]

and

rdd2 = [ (key1, [value5, value6]), (key2, [value7]) ]

Now, I want to join them by key values, so for example I want to return the following

ret = [ (key1, [value1, value2, value5, value6]), (key2, [value3, value4, value7]) ] 

How can I do this in Spark, using Python or Scala? One way is to use join, but join would create a tuple inside the tuple. I want only one tuple per key-value pair.

Just use join and then map over the resulting rdd.

rdd1.join(rdd2).map { case (k, (ls, rs)) => (k, ls ++ rs) }
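Below is a minimal, self-contained sketch of this approach. The local SparkContext setup, the JoinByKeyExample object name, and the sample data are assumptions added only to make it runnable end to end.

import org.apache.spark.{SparkConf, SparkContext}

object JoinByKeyExample {
  def main(args: Array[String]): Unit = {
    // Hypothetical local setup, just for the sketch.
    val sc = new SparkContext(new SparkConf().setAppName("join-by-key").setMaster("local[*]"))

    val rdd1 = sc.parallelize(Seq("key1" -> List("value1", "value2"),
                                  "key2" -> List("value3", "value4")))
    val rdd2 = sc.parallelize(Seq("key1" -> List("value5", "value6"),
                                  "key2" -> List("value7")))

    // join pairs up the two lists per key; map flattens the nested tuple into a single list.
    val ret = rdd1.join(rdd2).map { case (k, (ls, rs)) => (k, ls ++ rs) }

    ret.collect().foreach(println)
    // (key1,List(value1, value2, value5, value6))
    // (key2,List(value3, value4, value7))

    sc.stop()
  }
}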

I would union the two RDDs and then use reduceByKey to merge the values.

(rdd1 union rdd2).reduceByKey(_ ++ _)
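A minimal sketch of this variant, reusing the hypothetical rdd1 and rdd2 from the example above:

// union keeps every record from both RDDs; reduceByKey then concatenates the lists per key.
val ret = (rdd1 union rdd2).reduceByKey(_ ++ _)

ret.collect().foreach(println)
// (key1,List(value1, value2, value5, value6))
// (key2,List(value3, value4, value7))

One difference worth noting: join is an inner join, so a key that appears in only one RDD is dropped, whereas union followed by reduceByKey keeps it with just that RDD's values.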
