Split an array column into rows in PySpark

I have a DataFrame similar to the following:
new_df = spark.createDataFrame([
([['hello', 'productcode'], ['red','color']], 7),
([['hi', 'productcode'], ['blue', 'color']], 8),
([['hoi', 'productcode'], ['black','color']], 7)
], ["items", "frequency"])
new_df.show(3, False)
# +------------------------------------------------------------+---------+
# |items |frequency|
# +------------------------------------------------------------+---------+
# |[WrappedArray(hello, productcode), WrappedArray(red, color)]|7 |
# |[WrappedArray(hi, productcode), WrappedArray(blue, color)] |8 |
# |[WrappedArray(hoi, productcode), WrappedArray(black, color)]|7 |
# +------------------------------------------------------------+---------+
I need to generate a new DataFrame that looks like this:
# +------------+------+---------+
# |productcode |color |frequency|
# +------------+------+---------+
# |hello       |red   |7        |
# |hi          |blue  |8        |
# |hoi         |black |7        |
# +------------+------+---------+
You can convert `items` into a map:
from pyspark.sql.functions import col, udf

@udf("map<string, string>")
def as_map(vks):
    # each inner pair is [value, key], so flip the order when building the map
    return {k: v for v, k in vks}
remapped = new_df.select("frequency", as_map("items").alias("items"))
Collect the keys:
keys = remapped.select("items").rdd \
.flatMap(lambda x: x[0].keys()).distinct().collect()
And select:
remapped.select([col("items")[key] for key in keys] + ["frequency"])
+------------+------------------+---------+
|items[color]|items[productcode]|frequency|
+------------+------------------+---------+
| red| hello| 7|
| blue| hi| 8|
| black| hoi| 7|
+------------+------------------+---------+