
Pickle error while creating new pyspark dataframe by processing old dataframe using foreach method

Given a PySpark DataFrame given_df, I need to use it to generate a new DataFrame new_df.

I am trying to process the PySpark DataFrame row by row using the foreach() method. For simplicity, let's say both DataFrames, given_df and new_df, consist of a single column.

I have to process each row of this DataFrame and, based on the value present in that cell, I create some new rows and add them to new_df by union-ing them with it. The number of rows generated while processing a single row of given_df is variable.

new_df = spark.createDataFrame([], schema=['SampleField'])  # create an empty dataframe initially

def func(row):
    # Declare new_df as global; otherwise Python treats the assignment below
    # as creating a local variable and raises an error on the reference.
    global new_df
    rows_to_append = getNewRowsAfterProcessingCurrentRow(row)
    new_df = new_df.union(spark.createDataFrame(data=rows_to_append, schema=['SampleField']))

given_df.foreach(func)  # given_df already holds data; run func for each of its rows

However, this results in a pickle error.

If the union call is commented out, the error does not occur.

PicklingError: Could not serialize object: Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.

Traceback (most recent call last):
  File "/databricks/spark/python/pyspark/serializers.py", line 476, in dumps
    return cloudpickle.dumps(obj, pickle_protocol)
  File "/databricks/spark/python/pyspark/cloudpickle.py", line 1097, in dumps
    cp.dump(obj)
  File "/databricks/spark/python/pyspark/cloudpickle.py", line 356, in dump
    return Pickler.dump(self, obj)
  File "/databricks/python/lib/python3.7/pickle.py", line 437, in dump
    self.save(obj)
  File "/databricks/python/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/databricks/python/lib/python3.7/pickle.py", line 789, in save_tuple
    save(element)
  File "/databricks/python/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/databricks/spark/python/pyspark/cloudpickle.py", line 500, in save_function
    self.save_function_tuple(obj)
  File "/databricks/spark/python/pyspark/cloudpickle.py", line 729, in save_function_tuple
    save(state)
  File "/databricks/python/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/databricks/python/lib/python3.7/pickle.py", line 859, in save_dict
    self._batch_setitems(obj.items())
  File "/databricks/python/lib/python3.7/pickle.py", line 885, in _batch_setitems
    save(v)
  File "/databricks/python/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/databricks/python/lib/python3.7/pickle.py", line 819, in save_list
    self._batch_appends(obj)
  File "/databricks/python/lib/python3.7/pickle.py", line 843, in _batch_appends
    save(x)
  File "/databricks/python/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/databricks/spark/python/pyspark/cloudpickle.py", line 500, in save_function
    self.save_function_tuple(obj)
  File "/databricks/spark/python/pyspark/cloudpickle.py", line 729, in save_function_tuple
    save(state)
  File "/databricks/python/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/databricks/python/lib/python3.7/pickle.py", line 859, in save_dict
    self._batch_setitems(obj.items())
  File "/databricks/python/lib/python3.7/pickle.py", line 885, in _batch_setitems
    save(v)
  File "/databricks/python/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/databricks/python/lib/python3.7/pickle.py", line 819, in save_list
    self._batch_appends(obj)
  File "/databricks/python/lib/python3.7/pickle.py", line 843, in _batch_appends
    save(x)
  File "/databricks/python/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/databricks/spark/python/pyspark/cloudpickle.py", line 500, in save_function
    self.save_function_tuple(obj)
  File "/databricks/spark/python/pyspark/cloudpickle.py", line 729, in save_function_tuple
    save(state)
  File "/databricks/python/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/databricks/python/lib/python3.7/pickle.py", line 859, in save_dict
    self._batch_setitems(obj.items())
  File "/databricks/python/lib/python3.7/pickle.py", line 885, in _batch_setitems
    save(v)
  File "/databricks/python/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/databricks/python/lib/python3.7/pickle.py", line 819, in save_list
    self._batch_appends(obj)
  File "/databricks/python/lib/python3.7/pickle.py", line 843, in _batch_appends
    save(x)
  File "/databricks/python/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/databricks/spark/python/pyspark/cloudpickle.py", line 500, in save_function
    self.save_function_tuple(obj)
  File "/databricks/spark/python/pyspark/cloudpickle.py", line 729, in save_function_tuple
    save(state)
  File "/databricks/python/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/databricks/python/lib/python3.7/pickle.py", line 859, in save_dict
    self._batch_setitems(obj.items())
  File "/databricks/python/lib/python3.7/pickle.py", line 885, in _batch_setitems
    save(v)
  File "/databricks/python/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/databricks/python/lib/python3.7/pickle.py", line 819, in save_list
    self._batch_appends(obj)
  File "/databricks/python/lib/python3.7/pickle.py", line 846, in _batch_appends
    save(tmp[0])
  File "/databricks/python/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/databricks/spark/python/pyspark/cloudpickle.py", line 500, in save_function
    self.save_function_tuple(obj)
  File "/databricks/spark/python/pyspark/cloudpickle.py", line 729, in save_function_tuple
    save(state)
  File "/databricks/python/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/databricks/python/lib/python3.7/pickle.py", line 859, in save_dict
    self._batch_setitems(obj.items())
  File "/databricks/python/lib/python3.7/pickle.py", line 885, in _batch_setitems
    save(v)
  File "/databricks/python/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/databricks/python/lib/python3.7/pickle.py", line 819, in save_list
    self._batch_appends(obj)
  File "/databricks/python/lib/python3.7/pickle.py", line 846, in _batch_appends
    save(tmp[0])
  File "/databricks/python/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/databricks/spark/python/pyspark/cloudpickle.py", line 500, in save_function
    self.save_function_tuple(obj)
  File "/databricks/spark/python/pyspark/cloudpickle.py", line 729, in save_function_tuple
    save(state)
  File "/databricks/python/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/databricks/python/lib/python3.7/pickle.py", line 859, in save_dict
    self._batch_setitems(obj.items())
  File "/databricks/python/lib/python3.7/pickle.py", line 885, in _batch_setitems
    save(v)
  File "/databricks/python/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/databricks/python/lib/python3.7/pickle.py", line 819, in save_list
    self._batch_appends(obj)
  File "/databricks/python/lib/python3.7/pickle.py", line 846, in _batch_appends
    save(tmp[0])
  File "/databricks/python/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/databricks/spark/python/pyspark/cloudpickle.py", line 495, in save_function
    self.save_function_tuple(obj)
  File "/databricks/spark/python/pyspark/cloudpickle.py", line 729, in save_function_tuple
    save(state)
  File "/databricks/python/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/databricks/python/lib/python3.7/pickle.py", line 859, in save_dict
    self._batch_setitems(obj.items())
  File "/databricks/python/lib/python3.7/pickle.py", line 885, in _batch_setitems
    save(v)
  File "/databricks/python/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/databricks/python/lib/python3.7/pickle.py", line 859, in save_dict
    self._batch_setitems(obj.items())
  File "/databricks/python/lib/python3.7/pickle.py", line 885, in _batch_setitems
    save(v)
  File "/databricks/python/lib/python3.7/pickle.py", line 549, in save
    self.save_reduce(obj=obj, *rv)
  File "/databricks/python/lib/python3.7/pickle.py", line 662, in save_reduce
    save(state)
  File "/databricks/python/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/databricks/python/lib/python3.7/pickle.py", line 859, in save_dict
    self._batch_setitems(obj.items())
  File "/databricks/python/lib/python3.7/pickle.py", line 885, in _batch_setitems
    save(v)
  File "/databricks/python/lib/python3.7/pickle.py", line 524, in save
    rv = reduce(self.proto)
  File "/databricks/spark/python/pyspark/context.py", line 356, in __getnewargs__
    "It appears that you are attempting to reference SparkContext from a broadcast "
Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.

To better understand what I am trying to do, here is an example illustrating a possible use case:

Suppose given_df is a DataFrame of sentences, where each sentence consists of some words separated by spaces.

given_df = spark.createDataFrame([("The old brown fox",), ("jumps over",), ("the lazy log",)], schema=["SampleField"])

new_df is a DataFrame consisting of each word on a separate row. So we process each row of given_df and, for each word obtained by splitting the row, insert a row into new_df.

new_df = spark.createDataFrame([("The",), ("old",), ("brown",), ("fox",), ("jumps",), ("over",), ("the",), ("lazy",), ("log",)], schema=["SampleField"])

You are trying to use the DataFrame API on the executors, which is not allowed, hence the PicklingError:

PicklingError: Could not serialize object: Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.


You should rewrite your code. You could, for example, use RDD.flatMap or, if you prefer the DataFrame API, the explode() function; a sketch of the flatMap variant is included after the output below.

Here is how you would do it using the latter approach:

given_df = spark.createDataFrame([("The old brown fox",), ("jumps over",), ("the lazy log",)], schema=["SampleField"])

from pyspark.sql.functions import udf, explode
from pyspark.sql.types import ArrayType, StringType

# Register the row-processing function as a UDF that returns an array of words.
@udf(returnType=ArrayType(StringType()))
def getNewRowsAfterProcessingCurrentRow(text):  # 'text' avoids shadowing the built-in str
    return text.split()

# explode() transposes each array of words into one row per word,
# and unionAll() appends the original sentences back.
new_df = given_df \
    .select(explode(getNewRowsAfterProcessingCurrentRow("SampleField")).alias("SampleField")) \
    .unionAll(given_df)

new_df.show()

  1. You wrap getNewRowsAfterProcessingCurrentRow() in a udf(). This simply makes your function available to the DataFrame API (a note on a UDF-free alternative follows the output below).
  2. You then wrap your function in another function called explode(). This is needed because you want to "explode" (or transpose) each split sentence into multiple rows, one word per row.
  3. Finally, you union the resulting DataFrame with the original given_df.

Output:

+-----------------+
|      SampleField|
+-----------------+
|              The|
|              old|
|            brown|
|              fox|
|            jumps|
|             over|
|              the|
|             lazy|
|              log|
|The old brown fox|
|       jumps over|
|     the lazy log|
+-----------------+
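
For completeness, here is a minimal sketch of the RDD.flatMap variant mentioned above. This is an illustration under assumptions rather than the answer's own code: it assumes the same single-column given_df and splits each sentence on the executors before converting back to a DataFrame.

# Minimal sketch of the RDD.flatMap alternative (assumes the same given_df).
# Each sentence row is split into words; flatMap emits one 1-tuple per word
# so toDF() can rebuild a single-column DataFrame on the driver side.
words_rdd = given_df.rdd.flatMap(lambda row: [(word,) for word in row.SampleField.split()])
new_df = words_rdd.toDF(["SampleField"]).unionAll(given_df)
new_df.show()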

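As a side note on the design choice: for a plain whitespace split, a Python UDF is not strictly necessary. The built-in pyspark.sql.functions.split produces the same array column without the Python serialization round-trip; a minimal sketch under the same given_df:

from pyspark.sql.functions import explode, split

# split() returns an array column natively, so no UDF is involved.
new_df = given_df \
    .select(explode(split("SampleField", r"\s+")).alias("SampleField")) \
    .unionAll(given_df)
new_df.show()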