How to split a list of dictionaries in one column into two columns in a PySpark dataframe?
I want to split the filteredaddress column of the Spark dataframe below into two new columns, Flag and Address:
customer_id|pincode|filteredaddress
1000045801 |121005 |[{'flag':'0', 'address':'House number 172, Parvatiya Colony Part-2 , N.I.T'}]
1000045801 |121005 |[{'flag':'1', 'address':'House number 172, Parvatiya Colony Part-2 , N.I.T'}]
1000045801 |121005 |[{'flag':'1', 'address':'House number 172, Parvatiya Colony Part-2 , N.I.T'}]
Can anyone please tell me how I can do it?
You can get the values from the filteredaddress map column using the keys:
df2 = df.selectExpr(
'customer_id', 'pincode',
"filteredaddress['flag'] as flag", "filteredaddress['address'] as address"
)
Other ways to access the map values are:
import pyspark.sql.functions as F
df.select(
'customer_id', 'pincode',
F.col('filteredaddress')['flag'],
F.col('filteredaddress')['address']
)
# or, more simply
df.select(
'customer_id', 'pincode',
'filteredaddress.flag',
'filteredaddress.address'
)