How to convert Pyspark dataframe to Python Dictionary
I am new to pyspark and have the requirement below.
Starting from the following dataframe, I build a two-column result (id and data_list) in which data_list is sorted by value:
+---+-----+-----+
| id| data|value|
+---+-----+-----+
|1_a|AB,Ca| 10|
|1_a|Cd,da| 5|
|1_a|aC,BE| 15|
|1_a|ER,rK| 20|
|2_b|JK,Lh| 1500|
|2_b|Yu,HK| 500|
|2_b|MK,HN| 100|
+---+-----+-----+
The sorted data_list:
+---+--------------------+
| id| data_list|
+---+--------------------+
|1_a|[Cd,da, AB,Ca, aC...|
|2_b|[MK,HN, Yu,HK, JK...|
+---+--------------------+
I apply a map transformation on the DF to get the required output (a list / Python dictionary):
data = order_df.rdd.map(lambda (x, y): (x.split("_")[1].lower(), (x.split("_")[0].lower(), y))) \
.groupByKey().mapValues(list)
which yields:
[('b', [('2', '[MK,HN, Yu,HK, JK,Lh]')]), ('a', [('1', '[Cd,da, AB,Ca, aC,BE, ER,rK]')])]
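For reference, here is a pure-Python sketch of what that map/groupByKey reshaping does to the composite ids (the sample rows are illustrative, shaped like the order_df rows above):

```python
from collections import defaultdict

# Illustrative rows shaped like (id, data_list-as-string) from order_df.
rows = [("1_a", "[Cd,da, AB,Ca]"), ("2_b", "[MK,HN, Yu,HK]")]

# The map step splits "1_a" into key "a" and value ("1", data_list).
pairs = [(x.split("_")[1].lower(), (x.split("_")[0].lower(), y)) for x, y in rows]
print(pairs)   # [('a', ('1', '[Cd,da, AB,Ca]')), ('b', ('2', '[MK,HN, Yu,HK]'))]

# groupByKey().mapValues(list) then collects all values that share a key.
grouped = defaultdict(list)
for k, v in pairs:
    grouped[k].append(v)
print(dict(grouped))
```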
Then I iterate over the list to get each element:
for dd in data.collect():
    print "==========", dd[1][0][1]
    for r in dd[1][0][1]:
        print r + "---"
I expect the output to be:
Cd,da
AB,Ca
aC,BE
ER,rK
....
But instead I get:
========== [Cd,da, AB,Ca, aC,BE, ER,rK]
ttttt: <type 'str'>
[
C
d
,
d
a
,
A
B
,
C
a
,
a
C
,
B
E
,
E
R
,
r
K
]
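This per-character output is the normal behavior of iterating over a plain Python string: dd[1][0][1] is one string, so the inner loop yields individual characters. A minimal illustration (the sample string is taken from the output above):

```python
# Iterating a string yields single characters, not comma-separated items.
s = "[Cd,da, AB,Ca, aC,BE, ER,rK]"
print([c for c in s][:4])   # ['[', 'C', 'd', ',']

# To recover list-like items the string has to be parsed first;
# a naive sketch that strips the brackets and splits on ", ":
items = s.strip("[]").split(", ")
print(items)                # ['Cd,da', 'AB,Ca', 'aC,BE', 'ER,rK']
```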
Below is the full code I am using to try to get that output.
from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext
from pyspark.sql.types import *
from pyspark.sql import functions as F
import operator
import ast
import json
conf = SparkConf().setMaster("local").setAppName("Demo DF")
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sparkContext=sc)
sqlContext.setConf("spark.sql.shuffle.partitions", "3")
def foo((x, y)):
    z = x.lower().split('_')
    return (z[1], (z[0], ast.literal_eval(json.dumps(y,
        ensure_ascii=False).encode('utf8'))))
# define udf
def sorter(l):
    res = sorted(l, key=operator.itemgetter(1))
    return [item[0] for item in res]

sort_udf = F.udf(sorter)
ll_list = [("1_a", "AB,Ca", 10), ("1_a", "Cd,da", 5), ("1_a", "aC,BE", 15), ("1_a", "ER,rK", 20),
           ("2_b", "JK,Lh", 1500), ("2_b", "Yu,HK", 500), ("2_b", "MK,HN", 100)]
input_df = sc.parallelize(ll_list).toDF(["id", "data", "value"])
input_df.show()
# create list column
grouped_df = input_df.groupby("id") \
    .agg(F.collect_list(F.struct("data", "value")) \
    .alias("list_col"))

# test
order_df = grouped_df.select("id", sort_udf("list_col") \
    .alias("data_list"))
order_df.show()
data = order_df.rdd.map(foo).groupByKey().mapValues(list)

for dd in data.collect():
    print "==========", dd[1][0][1]
    for r in dd[1][0][1]:
        print r + "---"
Could someone please help me fix this code to get the correct output?
The problem is that "data_list" is actually a string:
order_df.dtypes
# [('id', 'string'), ('data_list', 'string')]
You can parse it with ast.literal_eval:
import ast

def foo((x, y)):
    z = x.lower().split('_')
    return (z[1], (z[0], ast.literal_eval(y)))

order_df.rdd.map(foo).groupByKey().mapValues(list).collect()
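One caveat beyond the answer above: ast.literal_eval only succeeds when the string is a valid Python literal (i.e. with quoted elements). A quick sketch of both cases, using illustrative strings:

```python
import ast

# A properly quoted list string parses back into a real Python list.
print(ast.literal_eval("['Cd,da', 'AB,Ca']"))   # ['Cd,da', 'AB,Ca']

# An unquoted form like "[Cd,da, AB,Ca]" is not a valid literal and raises,
# so such strings would need manual splitting instead.
try:
    ast.literal_eval("[Cd,da, AB,Ca]")
except (ValueError, SyntaxError) as e:
    print("not a literal: " + type(e).__name__)
```

Alternatively, declaring the UDF's return type up front, e.g. F.udf(sorter, ArrayType(StringType())), keeps data_list as a real array column so no string parsing is needed at all.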