Create a dataframe in PySpark using RDD

I am trying to create a function that takes a dictionary and a schema as input and returns a dataframe, automatically filling unspecified fields with nulls. Here is my code:

from pyspark.sql import Row, SparkSession
from pyspark.sql import types as T


def get_element(name, row_dict):
    # Return the value for `name` from the dict, or None if it is missing
    value = None
    if name in row_dict:
        value = row_dict[name]

    return value


def create_row(schema, row_dict):
    # Build a tuple with one element per schema field, in schema order
    row_tuple = ()
    for fields in schema:
        element = get_element(fields.name, row_dict)
        row_tuple = (*row_tuple, element)

    return row_tuple


def fill(schema, values):
    spark = (
        SparkSession
            .builder
            .master("local[*]")
            .appName("pysparktest")
            .getOrCreate()
    )
    return \
        spark.createDataFrame(
            spark.sparkContext.parallelize(
                [(Row(create_row(schema.fields, row_dict)) for row_dict in values)]
            ),
            schema
        )

This is how I call the function:

    schema = T.StructType([T.StructField("base_currency", T.StringType()),
                           T.StructField("target_currency", T.StringType()),
                           T.StructField("valid_from", T.StringType()),
                           T.StructField("valid_until", T.StringType())])

    values = [
        {"base_currency": "USD", "target_currency": "EUR", "valid_from": "test",
         "valid_until": "test"},
        {"base_currency": "USD1", "target_currency": "EUR2"}
    ]

    fill(schema, values).show()

Error message:

_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
test_utilities/create_df_from_schema.py:37: in fill
    [(Row(create_row(schema.fields, row_dict)) for row_dict in values)]
../../../.virtualenv/etl-orderlines-generic-pivot/lib/python3.7/site-packages/pyspark/context.py:566: in parallelize
    jrdd = self._serialize_to_jvm(c, serializer, reader_func, createRDDServer)
../../../.virtualenv/etl-orderlines-generic-pivot/lib/python3.7/site-packages/pyspark/context.py:603: in _serialize_to_jvm
    serializer.dump_stream(data, tempFile)
../../../.virtualenv/etl-orderlines-generic-pivot/lib/python3.7/site-packages/pyspark/serializers.py:211: in dump_stream
    self.serializer.dump_stream(self._batched(iterator), stream)
../../../.virtualenv/etl-orderlines-generic-pivot/lib/python3.7/site-packages/pyspark/serializers.py:133: in dump_stream
    self._write_with_length(obj, stream)
../../../.virtualenv/etl-orderlines-generic-pivot/lib/python3.7/site-packages/pyspark/serializers.py:143: in _write_with_length
    serialized = self.dumps(obj)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = PickleSerializer()
obj = [<generator object fill.<locals>.<genexpr> at 0x1091b9350>]

    def dumps(self, obj):
>       return pickle.dumps(obj, pickle_protocol)
E       TypeError: can't pickle generator objects

../../../.virtualenv/etl-orderlines-generic-pivot/lib/python3.7/site-packages/pyspark/serializers.py:427: TypeError

Somehow the syntax for constructing the dataframe is not right.
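The traceback shows the real culprit: [(Row(create_row(schema.fields, row_dict)) for row_dict in values)] builds a list containing a single generator object (the parentheses make it a generator expression, not a tuple), and parallelize has to pickle whatever it is given. A minimal, Spark-free sketch of the same failure:

import pickle

gen = (x for x in range(3))  # a generator expression, not a list or tuple
pickle.dumps([gen])          # TypeError: can't pickle generator objects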

You are already returning tuples from the create_row function, so you don't need to wrap them in a Row object; just pass the list of tuples to spark.createDataFrame, like this:

def fill(schema, values):
    # `spark` is the existing SparkSession; a plain list of tuples is fine
    return spark.createDataFrame(
        [create_row(schema.fields, row_dict) for row_dict in values],
        schema
    )

Now you can call:

fill(schema, values).show()

#+-------------+---------------+----------+-----------+
#|base_currency|target_currency|valid_from|valid_until|
#+-------------+---------------+----------+-----------+
#|          USD|            EUR|      test|       test|
#|         USD1|           EUR2|      null|       null|
#+-------------+---------------+----------+-----------+

Moreover, you can actually shorten all of this to a single list comprehension without defining any of these functions:

spark.createDataFrame(
    [[row.get(f.name) for f in schema.fields] for row in values],
    schema
).show()

Calling .get(key) on a dict object returns None when the key does not exist, which Spark renders as null.
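For example, with the second dictionary from values above:

row = {"base_currency": "USD1", "target_currency": "EUR2"}
row.get("base_currency")  # 'USD1'
row.get("valid_from")     # None -> shown as null in the dataframe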
