
Transform column values into columns in a PySpark DataFrame

I would like to transform the values of a column into multiple columns of a DataFrame in PySpark on Databricks.

For example:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.sparkContext.parallelize([["dapd", "shop", "retail"],
    ["dapd", "shop", "on-line"],
    ["dapd", "payment", "credit"],
    ["wrfr", "shop", "supermarket"],
    ["wrfr", "shop", "brand store"],
    ["wrfr", "payment", "cash"]]).toDF(["id", "value1", "value2"])

I need to transform it to:

id      shop                       payment
dapd    retail|on-line             credit
wrfr    supermarket|brand store    cash

I am not sure how I can do this in PySpark.

Thanks,

What you're looking for is a combination of pivot and an aggregation function, such as collect_list() or collect_set(). Have a look at the available aggregation functions here: https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=agg#module-pyspark.sql.functions . Here's a code example:

from pyspark.sql import SparkSession
import pyspark.sql.functions as f

spark = SparkSession.builder.getOrCreate()

df = spark.sparkContext.parallelize([
    ["dapd", "shop", "retail"],
    ["dapd", "shop", "on-line"],
    ["dapd", "payment", "credit"],
    ["wrfr", "shop", "supermarket"],
    ["wrfr", "shop", "brand store"],
    ["wrfr", "payment", "cash"]]
).toDF(["id", "value1", "value2"])

df.show()
+----+-------+-----------+
|  id| value1|     value2|
+----+-------+-----------+
|dapd|   shop|     retail|
|dapd|   shop|    on-line|
|dapd|payment|     credit|
|wrfr|   shop|supermarket|
|wrfr|   shop|brand store|
|wrfr|payment|       cash|
+----+-------+-----------+


df.groupBy('id').pivot('value1').agg(f.collect_list("value2")).show(truncate=False)
+----+--------+--------------------------+
|id  |payment |shop                      |
+----+--------+--------------------------+
|dapd|[credit]|[retail, on-line]         |
|wrfr|[cash]  |[supermarket, brand store]|
+----+--------+--------------------------+

There is something like this you can do:

import pyspark.sql.functions as func

newdf = df.groupby('id').pivot('value1').agg(func.collect_list(func.col('value2')))
newdf = newdf.withColumn('shop', func.concat_ws('|', func.col('shop')[0], func.col('shop')[1]))
newdf = newdf.withColumn('payment', func.col('payment')[0])
newdf.show(20, False)
+----+-------+-----------------------+
|id  |payment|shop                   |
+----+-------+-----------------------+
|dapd|credit |retail|on-line         |
|wrfr|cash   |brand store|supermarket|
+----+-------+-----------------------+
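As a follow-up, and assuming a reasonably recent Spark version where concat_ws() accepts array columns, here is a minimal sketch that joins the collected values with '|' inside the aggregation itself, so the hard-coded indexes shop[0]/shop[1] are not needed and any number of values per group is handled:

import pyspark.sql.functions as f

# Sketch: pipe-join the collected values directly in the aggregation.
# concat_ws is applied to the array produced by collect_list, so it works
# for any number of value2 entries per id/value1 combination.
newdf = (df.groupBy('id')
           .pivot('value1')
           .agg(f.concat_ws('|', f.collect_list('value2'))))
newdf.show(truncate=False)

Note that collect_list does not guarantee the order of the collected values, so retail|on-line could also come out as on-line|retail.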

