PySpark - Create a Dataframe from a dictionary with list of values for each key
I have this type of dictionary:
{'xy': [['value1', 'value2'], ['value3', 'value4']],
'yx': [['value5', 'value6'], ['value7', 'value8']]}
I would like to create a PySpark DataFrame with 3 columns and 2 rows. Each key of the dict gets its own row. For example, the first row:
First column: xy
Second column: ["value1", "value2"]
Third column: ["value3", "value4"]
What is the best way to do this? I'm only able to create 2 columns: one with the key and one with all the lists combined, which is not my desired result.
This is your data dictionary:
data = {
'xy': [['value1', 'value2'], ['value3', 'value4']],
'yx': [['value5', 'value6'], ['value7', 'value8']]
}
You can build the rows with a list comprehension:
df = spark.createDataFrame(
[[k] + v for k, v in data.items()],
schema=['col1', 'col2', 'col3']
)
df.show(10, False)
+----+----------------+----------------+
|col1|col2 |col3 |
+----+----------------+----------------+
|xy |[value1, value2]|[value3, value4]|
|yx |[value5, value6]|[value7, value8]|
+----+----------------+----------------+
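The core of the answer is the comprehension that turns each `(key, [list1, list2])` entry into one flat row `[key, list1, list2]`. That step is plain Python and can be sanity-checked without a Spark session (a minimal sketch; only the `spark.createDataFrame` call itself needs PySpark):

```python
data = {
    'xy': [['value1', 'value2'], ['value3', 'value4']],
    'yx': [['value5', 'value6'], ['value7', 'value8']],
}

# Prepend the key to its list of value lists: each dict entry becomes one row
# of the form [key, inner_list_1, inner_list_2].
rows = [[k] + v for k, v in data.items()]
# rows == [['xy', ['value1', 'value2'], ['value3', 'value4']],
#          ['yx', ['value5', 'value6'], ['value7', 'value8']]]
```

Passing `rows` to `spark.createDataFrame(rows, schema=['col1', 'col2', 'col3'])` then infers `col1` as a string column and `col2`/`col3` as array-of-string columns, which matches the output above.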