
Convert a Pipeline RDD into a Spark dataframe

Starting from this:

items.take(2)
[['home', 'alone', 'apparently'], ['st','louis','plant','close','die','old','age','workers','making','cars','since','onset','mass','automotive','production','1920s']]

type(items)
pyspark.rdd.PipelinedRDD

I would like to convert it into a Spark dataframe with one column, and one row per list of words.

You can create a dataframe with toDF, but remember to wrap each list in another list first, so that Spark understands that each row contains only one column.

df = items.map(lambda x: [x]).toDF(['words'])

df.show(truncate=False)
+------------------------------------------------------------------------------------------------------------------+
|words                                                                                                             |
+------------------------------------------------------------------------------------------------------------------+
|[home, alone, apparently]                                                                                         |
|[st, louis, plant, close, die, old, age, workers, making, cars, since, onset, mass, automotive, production, 1920s]|
+------------------------------------------------------------------------------------------------------------------+

df.printSchema()
root
 |-- words: array (nullable = true)
 |    |-- element: string (containsNull = true)
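The wrapping step is the easy part to get wrong: without it, Spark would treat each word in the list as a separate column. A minimal pure-Python sketch of what `items.map(lambda x: [x])` produces per record (no Spark required, sample data taken from the question):

```python
# Each record is a list of words. Wrapping each record in another list
# turns it into a one-column row, so toDF(['words']) sees a single
# array<string> column instead of one column per word.
records = [
    ['home', 'alone', 'apparently'],
    ['st', 'louis', 'plant', 'close'],
]

# Equivalent of items.map(lambda x: [x]).collect() on the driver side.
rows = [[x] for x in records]

print(rows[0])  # [['home', 'alone', 'apparently']]
print(len(rows[0]))  # 1  -> exactly one column per row
```

Passing `['words']` to `toDF` then names that single array column, which is what the `words: array` entry in the printed schema reflects.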

