
Pyspark Convert RDD of tuples to Dataframe

I have an RDD of tuples whose first two rows look like this:

[[('n', 12.012457082117459), ('s', 0.79112758892014912)],
 [('t', 3.6243409329763652), ('vn', 3.6243409329763652), ('n', 52.743253562212828), ('v', 11.644347760553064)]]

In each tuple, the first value (e.g. 'n', 's', 't') is the desired column name and the second value (e.g. 12.012, 0.7911, ...) is the desired value for that column. However, not every column name appears in every list (row) of the RDD. For example, only 'n' and 's' appear in the first row, while 's' is missing from the second. I therefore want to convert this RDD into a dataframe in which any column that does not appear in the original tuples gets the value 0. In other words, the first two rows might look like this:

n     s      t       vn     (other columns omitted)
12    0.79   0       0      ...
52    0      3.62    3.62   ...

I tried the following:

from pyspark.sql import Row

row = Row('l','eng','q','g','j','b','nt','z','n','d','f','i','k','s','vn','nz','v','nrt','tg','nrfg','t','ng','zg','a')
df = tup_sum_data.map(row).toDF()

The strings in Row() are the column names I want, but I get the following error:

TypeError                                 Traceback (most recent call last)
/Users/1/Documents/spark/python/pyspark/sql/types.py in _infer_type(obj)
968         try:
--> 969             return _infer_schema(obj)
970         except TypeError:

/Users/1/Documents/spark/python/pyspark/sql/types.py in _infer_schema(row)
991     else:
--> 992         raise TypeError("Can not infer schema for type: %s" % type(row))
993 

TypeError: Can not infer schema for type: <class 'numpy.float64'>

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
/Users/1/Documents/spark/python/pyspark/sql/types.py in _infer_type(obj)
968         try:
--> 969             return _infer_schema(obj)
970         except TypeError:

/Users/1/Documents/spark/python/pyspark/sql/types.py in _infer_type(obj)
969             return _infer_schema(obj)
970         except TypeError:
--> 971             raise TypeError("not supported type: %s" % type(obj))
972 
973 

TypeError: not supported type: <class 'tuple'>

Some lines of the traceback have been omitted. Can anyone help me figure out how to fix this? Thanks!
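
The first traceback shows Spark's schema inference rejecting numpy.float64 values. A minimal sketch of the cast that removes that error, assuming the RDD is named tup_sum_data as in the attempt above:

# Cast every numpy.float64 to a plain Python float so Spark's
# schema inference can handle the values.
tup_sum_data = tup_sum_data.map(
    lambda row: [(name, float(value)) for name, value in row])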

UPDATE: I converted the data type from np.float64 to float, and the error is gone. However, the dataframe still does not look the way I want; it looks like this:

+--------------------+
|                   l|
+--------------------+
|[[n,12.0124570821...|
|[[t,3.62434093297...|
|[[a,0.44628710262...|
|[[n,16.7534769832...|
|[[n,17.6017774340...|
+--------------------+
only showing top 5 rows

So can anyone help me get a correctly formatted dataframe? Thanks!
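
For context on that output: Row('l','eng',...) creates a Row class with 24 fields, and tup_sum_data.map(row) calls that class with each entire list as its single positional argument, so the whole list lands in the first field, l, and the remaining field names are dropped. The answer below avoids this by converting each list of tuples to a dict and filling in missing keys with 0.0 before building the Row: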

from pyspark.sql.types import *
from pyspark.sql import *

# Explicit schema: one FloatType column per expected key.
data_frame_schema = StructType([
    StructField("n", FloatType()),
    StructField("s", FloatType()),
    StructField("t", FloatType()),
    StructField("v", FloatType()),
    StructField("vn", FloatType())
])

raw_list = [[('n', 12.012457082117459), ('s', 0.79112758892014912)],
            [('t', 3.6243409329763652), ('vn', 3.6243409329763652), ('n', 52.743253562212828), ('v', 11.644347760553064)]]

raw_rdd = sc.parallelize(raw_list)

# dict_to_row = lambda d: Row(n=d.get("n"), s=d.get("s"), t=d.get("t"), v=d.get("v"), vn=d.get("vn"))  # missing keys -> null
# Default missing keys to 0.0 instead of null:
dict_to_row = lambda d: Row(n=d.get("n", 0.0), s=d.get("s", 0.0), t=d.get("t", 0.0), v=d.get("v", 0.0), vn=d.get("vn", 0.0))

# Each list of tuples becomes a dict, then a Row.
row_rdd = raw_rdd.map(lambda l: dict_to_row(dict(l)))
df = spark.createDataFrame(row_rdd, data_frame_schema)
df.show()

Pasting the above into a pyspark shell produces this output:

+---------+----------+--------+---------+--------+
|        n|         s|       t|        v|      vn|
+---------+----------+--------+---------+--------+
|12.012457|0.79112756|     0.0|      0.0|     0.0|
| 52.74325|       0.0|3.624341|11.644348|3.624341|
+---------+----------+--------+---------+--------+
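
Note that FloatType is 32-bit, which is why the displayed values lose precision (0.79112758892... prints as 0.79112756); DoubleType would keep the full values. Also, if the full set of columns is not known up front (the question lists 24 of them), the column names can be derived from the data itself. A sketch under that assumption, reusing raw_rdd and the spark session from above:

from pyspark.sql import Row

# Collect the distinct keys across all rows; sorted, they become the columns.
keys = sorted(raw_rdd.flatMap(lambda row: [name for name, _ in row])
                     .distinct()
                     .collect())

# Build a Row class with those fields, filling missing keys with 0.0.
DynamicRow = Row(*keys)
row_rdd = raw_rdd.map(dict).map(
    lambda d: DynamicRow(*[float(d.get(k, 0.0)) for k in keys]))

# Without an explicit schema, Spark infers DoubleType for the float values.
df = spark.createDataFrame(row_rdd)
df.show()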
