
Create multidict from pyspark dataframe

I am new to pyspark and want to create a dictionary from a pyspark dataframe. I have working pandas code, but I need the equivalent command in pyspark and somehow I can't figure out how to do it.

df = spark.createDataFrame([
(11, 101, 5.9),
(11, 102, 5.4),
(22, 111, 5.2),
(22, 112, 5.9),
(22, 101, 5.7),
(33, 101, 5.2),
(44, 102, 5.3),
], ['user_id', 'team_id', 'height'])
df = df.select(['user_id', 'team_id'])
df.show()

+-------+-------+
|user_id|team_id|
+-------+-------+
|     11|    101|
|     11|    102|
|     22|    111|
|     22|    112|
|     22|    101|
|     33|    101|
|     44|    102|
+-------+-------+


df.toPandas().groupby('user_id')[
        'team_id'].apply(list).to_dict()


Result: 
{11: [101, 102], 22: [111, 112, 101], 33: [101], 44: [102]}

Looking for an efficient way in pyspark to create the above multidict.

You can aggregate the team_id column as a list and then collect the RDD as a dictionary using the collectAsMap method:

import pyspark.sql.functions as F

df.groupBy("user_id").agg(F.collect_list("team_id")).rdd.collectAsMap()
# {33: [101], 11: [101, 102], 44: [102], 22: [111, 112, 101]}
