
Spark: How to turn a tuple into a DataFrame

I have a train_rdd with records like (('a',1),('b',2),('c',3)), and I use the following to turn it into a DataFrame:

from pyspark.sql import Row
# build one Row per record from its (key, value) pairs
train_label_df = train_rdd.map(lambda x: Row(**dict(x))).toDF()

But some keys may be missing from some of the records, so an error occurs:

File "/mnt/hadoop/yarn/local/usercache/hdfs/appcache/application_/container_05_000017/pyspark.zip/pyspark/worker.py", line 253, in main
    process()
File "/mnt/hadoop/yarn/local/usercache/hdfs/appcache/application_/container_05_000017/pyspark.zip/pyspark/worker.py", line 248, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
File "/mnt/hadoop/yarn/local/usercache/hdfs/appcache/application_/container_05_000002/pyspark.zip/pyspark/rdd.py", line 2440, in pipeline_func
File "/mnt/hadoop/yarn/local/usercache/hdfs/appcache/application_/container_05_000002/pyspark.zip/pyspark/rdd.py", line 2440, in pipeline_func
File "/mnt/hadoop/yarn/local/usercache/hdfs/appcache/application_/container_05_000002/pyspark.zip/pyspark/rdd.py", line 350, in func
File "/mnt/hadoop/yarn/local/usercache/hdfs/appcache/application_/container_05_000002/pyspark.zip/pyspark/rdd.py", line 1859, in combineLocally
File "/mnt/hadoop/yarn/local/usercache/hdfs/appcache/application_/container_05_000017/pyspark.zip/pyspark/shuffle.py", line 237, in mergeValues
    for k, v in iterator:
TypeError: cannot unpack non-iterable NoneType object
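One likely cause is that the records do not all have the same shape: dict() accepts a tuple of (key, value) pairs, but a record that is a single bare pair such as ('a',1) gets iterated element by element and fails. A minimal plain-Python sketch (the sample values here are illustrative, not from the actual job):

dict((('a', 1), ('b', 2)))  # OK: {'a': 1, 'b': 2}
dict(('a', 1))              # ValueError: dictionary update sequence element #0 has length 1

On top of that, toDF() infers the schema from the first rows it samples, so Rows built from records with differing keys may not line up with the inferred schema.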

Is there any other way to convert an RDD of tuples into a DataFrame?


Update:

I also tried createDataFrame:

from pyspark.sql.types import StructType, StructField, StringType

rdd = sc.parallelize([('a', 1), (('a', 1), ('b', 2)), (('a', 1), ('b', 2), ('c', 3))])
schema = StructType([
    StructField("a", StringType(), True),
    StructField("b", StringType(), True),
    StructField("c", StringType(), True),
])
train_label_df = sqlContext.createDataFrame(rdd, schema)
train_label_df.show()

This raises an error:

  File "/home/spark/python/pyspark/sql/types.py", line 1400, in verify_struct
    "length of fields (%d)" % (len(obj), len(verifiers))))
ValueError: Length of object (2) does not match with length of fields (3)
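createDataFrame verifies that each record supplies exactly one value per schema field, in order. The first sample record, ('a',1), is itself a 2-tuple, so it is read as a row with two values against three fields:

len(('a', 1))        # 2 values in the record
len(schema.fields)   # 3 fields in the schema -> hence the ValueError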

You can map each tuple to a dictionary:

# wrap a bare pair in a list so dict() sees a sequence of pairs
rdd1 = rdd.map(lambda x: dict(x if isinstance(x[0], tuple) else [x]))
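With the sample RDD above, every record, including the bare pair ('a',1), becomes a dictionary:

rdd1.collect()
# [{'a': 1}, {'a': 1, 'b': 2}, {'a': 1, 'b': 2, 'c': 3}]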

and then do one of the following:

from pyspark.sql import Row

cols = ["a", "b", "c"]

# dict.get returns None for missing keys, so every Row gets all three fields
rdd1.map(lambda x: Row(**{c: x.get(c) for c in cols})).toDF().show()
+---+----+----+
|  a|   b|   c|
+---+----+----+
|  1|null|null|
|  1|   2|null|
|  1|   2|   3|
+---+----+----+

Or:

rdd1.map(lambda x: tuple(x.get(c) for c in cols)).toDF(cols).show()
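This builds a plain tuple per record in a fixed column order and passes the column names to toDF, which should print the same table as above.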

