
Spark: use reduceByKey instead of groupByKey and mapValues

I have an RDD with duplicate values in the following format:

[ {key1: A}, {key1: A}, {key1: B}, {key1: C}, {key2: B}, {key2: B}, {key2: D}, ..]

I would like the new RDD to have the following output and to get rid of duplicates.

[ {key1: [A,B,C]}, {key2: [B,D]}, ..]

I have managed to do this with the following code, by putting the values in a set to get rid of duplicates.

RDD_unique = RDD_duplicates.groupByKey().mapValues(lambda x: set(x))

But I am trying to achieve this more elegantly, in a single command, with

RDD_unique = RDD_duplicates.reduceByKey(...)

I have not managed to come up with a lambda function that gives me the same result via reduceByKey.

You can do it like this:

# mapValues/reduceByKey operate on pair RDDs, so the records are (key, value) tuples
data = sc.parallelize([("key1", "A"), ("key1", "A"), ("key1", "B"),
                       ("key1", "C"), ("key2", "B"), ("key2", "B"), ("key2", "D")])

result = (data
  .mapValues(lambda x: {x})                   # wrap each value in a one-element set
  .reduceByKey(lambda s1, s2: s1.union(s2)))  # merge the sets per key, dropping duplicates
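Wrapping each value in a one-element set first keeps the reduce function associative and commutative, which is what reduceByKey requires, and the set union drops the duplicates along the way.

As a quick check (a minimal sketch, assuming the same SparkContext sc and the result RDD defined above), collecting and sorting the output should give each key paired with its deduplicated set of values:

print(sorted(result.collect()))
# [('key1', {'A', 'B', 'C'}), ('key2', {'B', 'D'})]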
