
Pyspark: convert tuple type RDD to DataFrame

I have an RDD of a complex tuple type, such as:

(20190701, [11,21,31], [('A',10), ('B', 20)])

The schema can be defined however I need.

How can I turn it into a DataFrame like this:

date     | 0  | 1  | 2  | A  | B
20190701 | 11 | 21 | 31 | 10 | 20

One way:

from pyspark.sql import Row

rdd = sc.parallelize([(20190701, [11,21,31], [('A',10), ('B', 20)])])

# customize a Row class based on schema    
MRow = Row("date", "0", "1", "2", "A", "B")

# one Row per record: the date, the three list values, then the value of each pair
rdd.map(lambda x: MRow(x[0], *x[1], *map(lambda e: e[1], x[2]))).toDF().show()
+--------+---+---+---+---+---+
|    date|  0|  1|  2|  A|  B|
+--------+---+---+---+---+---+
|20190701| 11| 21| 31| 10| 20|
+--------+---+---+---+---+---+
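
A shorter variant of the same idea (a sketch assuming Python 3.5+ tuple unpacking and the rdd defined above) skips the custom Row class and passes the column names straight to toDF:

rdd.map(lambda x: (x[0], *x[1], *[e[1] for e in x[2]])) \
   .toDF(["date", "0", "1", "2", "A", "B"]).show()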

Or another way:

# field names come from the data itself: the list positions and the pair keys
rdd.map(lambda x: Row(date=x[0], **dict((str(i), e) for i, e in list(enumerate(x[1])) + x[2]))).toDF().show()
+---+---+---+---+---+--------+
|  0|  1|  2|  A|  B|    date|
+---+---+---+---+---+--------+
| 11| 21| 31| 10| 20|20190701|
+---+---+---+---+---+--------+
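
Note that when Row is built from keyword arguments, Spark versions before 3.0 sort the field names alphabetically, which is why date lands in the last column here. A select afterwards restores the intended order:

rdd.map(lambda x: Row(date=x[0], **dict((str(i), e) for i, e in list(enumerate(x[1])) + x[2]))) \
   .toDF().select("date", "0", "1", "2", "A", "B").show()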
Another answer goes through pandas instead:

import pandas as pd

# parallelizing the tuple itself yields an RDD whose three elements are its components
rdd = sc.parallelize((20190701, [11,21,31], [('A',10), ('B', 20)]))

elements = rdd.take(3)

# flatten into one row: the date, the three list values, and the value of each pair
a = [elements[0]] + elements[1] + [elements[2][0][1], elements[2][1][1]]

sdf = spark.createDataFrame(pd.DataFrame([a], columns=["date", "0", "1", "2", "A", "B"]))
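
Since take collects the data to the driver, this variant only makes sense for a record or two; the Row-based approaches above stay distributed and scale to the full RDD.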
