I have a dataframe containing a single column whose elements are of type MapType(StringType(), IntegerType()). I would like to obtain the cumulative sum of that column, where the sum operation means adding two dictionaries.
Minimal example:
a = [{'Maps': {'a': 1, 'b': 2, 'c': 3}}, {'Maps': {'a': 2, 'b': 4, 'd': 6}}]
df = spark.createDataFrame(a)
df.show(5, False)
+---------------------------+
|Maps |
+---------------------------+
|Map(a -> 1, b -> 2, c -> 3)|
|Map(a -> 2, b -> 4, d -> 6)|
+---------------------------+
If I were to obtain the cumulative sum of the column Maps, I should get the following result.
+-----------------------------------+
|Maps |
+-----------------------------------+
|Map(a -> 3, b -> 6, c -> 3, d -> 6)|
+-----------------------------------+
PS: I am using Python 2.6, so collections.Counter is not available. I can probably install it if absolutely necessary.
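(For reference, on Python 2.7+ the per-pair merge would be a one-liner with Counter, which sums values for overlapping keys on +. A minimal sketch of just the dictionary addition, outside Spark:)
from collections import Counter  # requires Python 2.7+
d1 = Counter({'a': 1, 'b': 2, 'c': 3})
d2 = Counter({'a': 2, 'b': 4, 'd': 6})
dict(d1 + d2)
# {'a': 3, 'b': 6, 'c': 3, 'd': 6}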
My attempts:
I have tried an accumulator-based approach and an approach that uses fold.
Accumulator
from pyspark.accumulators import AccumulatorParam
from pyspark.sql.types import MapType, StringType, IntegerType

def addDictFun(x):
    global v
    v += x

class DictAccumulatorParam(AccumulatorParam):
    def zero(self, d):
        return d
    def addInPlace(self, d1, d2):
        for k in d1:
            d1[k] = d1[k] + (d2[k] if k in d2 else 0)
        for k in d2:
            if k not in d1:
                d1[k] = d2[k]
        return d1

v = sc.accumulator(MapType(StringType(), IntegerType()), DictAccumulatorParam())
cumsum_dict = df.rdd.foreach(addDictFun)
Now, at the end, I should have the resulting dictionary in v. Instead, I get the error MapType is not iterable (mostly on the line for k in d1 in the function addInPlace).
rdd.fold
The rdd.fold-based approach is as follows:
def add_dicts(d1, d2):
    for k in d1:
        d1[k] = d1[k] + (d2[k] if k in d2 else 0)
    for k in d2:
        if k not in d1:
            d1[k] = d2[k]
    return d1

cumsum_dict = df.rdd.fold(MapType(StringType(), IntegerType()), add_dicts)
However, I get the same MapType is not iterable error here. Any idea where I am going wrong?
pyspark.sql.types are schema descriptors, not collections or external language representations, so they cannot be used with fold or Accumulator.
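In other words, the zero element has to be a real Python value of the right shape, for example an empty dict. A minimal sketch of that fix applied to the fold attempt, reusing the add_dicts function from the question:
# Use a plain dict as the zero value instead of a MapType schema object.
# dict(row.Maps) copies each map so add_dicts can mutate its argument safely.
cumsum_dict = df.rdd.map(lambda row: dict(row.Maps)).fold({}, add_dicts)
# {'a': 3, 'b': 6, 'c': 3, 'd': 6}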
The most straightforward solution is to explode and aggregate:
from pyspark.sql.functions import explode
df = spark.createDataFrame(
    [{'a': 1, 'b': 2, 'c': 3}, {'a': 2, 'b': 4, 'd': 6}],
    "map<string,integer>"
).toDF("Maps")
df.select(explode("Maps")).groupBy("key").sum("value").rdd.collectAsMap()
# {'d': 6, 'c': 3, 'b': 6, 'a': 3}
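collectAsMap() brings the aggregated pairs back to the driver as a plain Python dict; if you'd rather keep the result distributed, stop after the groupBy/sum and you still have a DataFrame.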
With the RDD API you can do a similar thing:
from operator import add
df.rdd.flatMap(lambda row: row.Maps.items()).reduceByKey(add).collectAsMap()
# {'b': 6, 'c': 3, 'a': 3, 'd': 6}
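reduceByKey has the nice property of combining values for each key locally on every partition before shuffling, so only one partial sum per key and partition crosses the network.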
Or, if you really want fold:
from operator import attrgetter
from collections import defaultdict

def merge(acc, d):
    for k in d:
        acc[k] += d[k]
    return acc

df.rdd.map(attrgetter("Maps")).fold(defaultdict(int), merge)
# defaultdict(int, {'a': 3, 'b': 6, 'c': 3, 'd': 6})
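Keep in mind that fold applies merge within each partition starting from a copy of the zero value and then once more to combine the per-partition results, so the zero element must be neutral for the operation; an empty defaultdict(int) is, since it adds nothing.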
@user8371915's answer using explode is more generic, but here's another approach that may be faster if you know the keys ahead of time:
import pyspark.sql.functions as f
myKeys = ['a', 'b', 'c', 'd']
df.select(*[f.sum(f.col('Maps').getItem(k)).alias(k) for k in myKeys]).show()
#+---+---+---+---+
#| a| b| c| d|
#+---+---+---+---+
#| 3| 6| 3| 6|
#+---+---+---+---+
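If you then want that single wide row as a plain Python dict on the driver, one small sketch is to collect it with first() and convert it via Row.asDict():
result = df.select(
    *[f.sum(f.col('Maps').getItem(k)).alias(k) for k in myKeys]
).first().asDict()
# {'a': 3, 'b': 6, 'c': 3, 'd': 6}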
And if you wanted the result in a MapType(), you could use pyspark.sql.functions.create_map like this:
from itertools import chain

df.select(
    f.create_map(
        list(
            chain.from_iterable(
                [[f.lit(k), f.sum(f.col('Maps').getItem(k))] for k in myKeys]
            )
        )
    ).alias("Maps")
).show(truncate=False)
#+-----------------------------------+
#|Maps |
#+-----------------------------------+
#|Map(a -> 3, b -> 6, c -> 3, d -> 6)|
#+-----------------------------------+
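This works because create_map expects an alternating sequence of key and value columns, which is exactly what the chain.from_iterable list comprehension builds: f.lit(k) for each key, interleaved with the corresponding summed value column.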