
How to convert a dictionary to dataframe in PySpark?

I am trying to convert the dictionary data_dict = {'t1': '1', 't2': '2', 't3': '3'} into a dataframe:

key  | value
-----+------
t1   | 1
t2   | 2
t3   | 3

To do this, I tried:

from pyspark.sql.types import StructType, StructField, StringType

schema = StructType([StructField("key", StringType(), True), StructField("value", StringType(), True)])
ddf = spark.createDataFrame(data_dict, schema)

But I get the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/Cellar/apache-spark/2.4.5/libexec/python/pyspark/sql/session.py", line 748, in createDataFrame
    rdd, schema = self._createFromLocal(map(prepare, data), schema)
  File "/usr/local/Cellar/apache-spark/2.4.5/libexec/python/pyspark/sql/session.py", line 413, in _createFromLocal
    data = list(data)
  File "/usr/local/Cellar/apache-spark/2.4.5/libexec/python/pyspark/sql/session.py", line 730, in prepare
    verify_func(obj)
  File "/usr/local/Cellar/apache-spark/2.4.5/libexec/python/pyspark/sql/types.py", line 1389, in verify
    verify_value(obj)
  File "/usr/local/Cellar/apache-spark/2.4.5/libexec/python/pyspark/sql/types.py", line 1377, in verify_struct
    % (obj, type(obj))))
TypeError: StructType can not accept object 't1' in type <class 'str'>
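
For context, the error occurs because iterating over a dict yields only its keys, so Spark receives bare strings like 't1' where the schema expects two-field rows. A quick plain-Python illustration:

data_dict = {'t1': '1', 't2': '2', 't3': '3'}
print(list(data_dict))          # ['t1', 't2', 't3']  -- iterating a dict gives keys only
print(list(data_dict.items()))  # [('t1', '1'), ('t2', '2'), ('t3', '3')]  -- key/value rows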

So I tried it without specifying any schema, only the column data type: ddf = spark.createDataFrame(data_dict, StringType()) and ddf = spark.createDataFrame(data_dict, StringType(), StringType())

But both resulted in a dataframe with a single column holding the dictionary's keys, like this:

+-----+
|value|
+-----+
|t1   |
|t2   |
|t3   |
+-----+

Could anyone let me know how to convert a dictionary to a Spark dataframe in PySpark?

You can use data_dict.items() to list the key/value pairs:

spark.createDataFrame(data_dict.items()).show()

which prints

+---+---+
| _1| _2|
+---+---+
| t1|  1|
| t2|  2|
| t3|  3|
+---+---+

Of course, you can also specify your schema:

from pyspark.sql.types import StructType, StructField, StringType

spark.createDataFrame(data_dict.items(), 
                      schema=StructType(fields=[
                          StructField("key", StringType()), 
                          StructField("value", StringType())])).show()

resulting in

+---+-----+
|key|value|
+---+-----+
| t1|    1|
| t2|    2|
| t3|    3|
+---+-----+
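
As a lighter-weight alternative (a sketch, not part of the original answer), you can keep the schema-less call and simply rename the auto-generated _1/_2 columns with toDF:

# Rename the default _1/_2 columns in a single chained call.
spark.createDataFrame(data_dict.items()).toDF("key", "value").show()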

I just want to add that if you have a dictionary mapping each column to a list of values (col: list[vals]), for example:

{
 "col1" : [1,2,3],
 "col2" : ["a", "b", "c"]
}

a possible solution is:

# Column names come from the dictionary keys.
columns = list(raw_data.keys())
# zip(*values) transposes the per-column lists into rows.
data = [[*vals] for vals in zip(*raw_data.values())]
df = spark.createDataFrame(data, columns)

But I'm new to pyspark, so I guess there is a better way to do this?
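
One possible alternative (a sketch, assuming pandas is available on the driver) is to build a pandas DataFrame first, since spark.createDataFrame accepts one directly and infers column names and types from it:

import pandas as pd

raw_data = {
    "col1": [1, 2, 3],
    "col2": ["a", "b", "c"],
}

# createDataFrame accepts a pandas DataFrame; column names and types
# are inferred from the pandas frame.
df = spark.createDataFrame(pd.DataFrame(raw_data))
df.show()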
