
Convert a Pipeline RDD into a Spark dataframe


Starting from this:

items.take(2)
[['home', 'alone', 'apparently'], ['st','louis','plant','close','die','old','age','workers','making','cars','since','onset','mass','automotive','production','1920s']]

type(items)
pyspark.rdd.PipelinedRDD
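
For anyone reproducing this locally, a minimal sketch that yields an equivalent PipelinedRDD could look like the following. The sample rows are taken from the take(2) output above; the SparkSession setup and variable names are assumptions, since the question does not show how items was built.

from pyspark.sql import SparkSession

# Assumed local setup; in the question, items presumably comes from an
# earlier tokenization step that is not shown here.
spark = SparkSession.builder.appName("rdd-to-df").getOrCreate()
sc = spark.sparkContext

# Each element is a list of word tokens, matching items.take(2) above.
# The trailing map() makes the result a PipelinedRDD, matching type(items).
items = sc.parallelize([
    ['home', 'alone', 'apparently'],
    ['st', 'louis', 'plant', 'close', 'die', 'old', 'age', 'workers',
     'making', 'cars', 'since', 'onset', 'mass', 'automotive',
     'production', '1920s'],
]).map(list)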

I want to convert this into a Spark dataframe with one column, and one row per list of words.

You can create a dataframe with toDF, but remember to wrap each list in a list first so that Spark understands that you have only one column per row.

df = items.map(lambda x: [x]).toDF(['words'])

df.show(truncate=False)
+------------------------------------------------------------------------------------------------------------------+
|words                                                                                                             |
+------------------------------------------------------------------------------------------------------------------+
|[home, alone, apparently]                                                                                         |
|[st, louis, plant, close, die, old, age, workers, making, cars, since, onset, mass, automotive, production, 1920s]|
+------------------------------------------------------------------------------------------------------------------+

df.printSchema()
root
 |-- words: array (nullable = true)
 |    |-- element: string (containsNull = true)
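
If you prefer to declare the schema explicitly rather than letting toDF infer the array type, an equivalent sketch (assuming an active SparkSession bound to the usual spark variable) would be:

from pyspark.sql.types import ArrayType, StringType, StructField, StructType

# Declare a single "words" column holding an array of strings.
schema = StructType([StructField('words', ArrayType(StringType()), True)])

# Each word list is still wrapped (here as a 1-tuple) so it maps to one column.
df = spark.createDataFrame(items.map(lambda x: (x,)), schema)
df.printSchema()  # same schema as shown above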

