
Convert multiple columns in a PySpark DataFrame into one dictionary

I have created a PySpark DataFrame like this one:

df = spark.createDataFrame([
    ('v', 3, 'a'),
    ('d', 2, 'b'),
    ('q', 9, 'c')],
    ["c1", "c2", "c3"]
)

df.show()

+---+---+---+
| c1| c2| c3|
+---+---+---+
|  v|  3|  a|
|  d|  2|  b|
|  q|  9|  c|
+---+---+---+

I want to create a new column like this:

+--------------------------+
|           c4             |
+--------------------------+
|{"c1":"v","c2":3,"c3":"a"}|
|{"c1":"d","c2":2,"c3":"b"}|
|{"c1":"q","c2":9,"c3":"c"}|
+--------------------------+

I want c4 to be of type MapType, not StringType. Also, I want to keep the values' original types (keep 3, 2 and 9 as integers, not strings).

Use struct + to_json like this if you want to get JSON strings:

import pyspark.sql.functions as F

df1 = df.select(
    F.to_json(
        F.struct(*[F.col(c) for c in df.columns])
    ).alias("c4")
)

df1.show(truncate=False)
#+--------------------------+
#|c4                        |
#+--------------------------+
#|{"c1":"v","c2":3,"c3":"a"}|
#|{"c1":"d","c2":2,"c3":"b"}|
#|{"c1":"q","c2":9,"c3":"c"}|
#+--------------------------+

EDIT

If you want a MapType column, use the create_map function, which takes an alternating sequence of key and value columns:

from itertools import chain

df1 = df.select(
    F.create_map(
        *chain(*[[F.lit(c), F.col(c)] for c in df.columns])
    ).alias("c4")
)

df1.show(truncate=False)
#+---------------------------+
#|c4                         |
#+---------------------------+
#|{c1 -> v, c2 -> 3, c3 -> a}|
#|{c1 -> d, c2 -> 2, c3 -> b}|
#|{c1 -> q, c2 -> 9, c3 -> c}|
#+---------------------------+
