
Overwrite csv file on s3 fails in pyspark

When I load data from an S3 bucket into a PySpark DataFrame, do some operations on it (join, union), and then try to overwrite the same path I read from before ('data/csv/'), I get this error:

py4j.protocol.Py4JJavaError: An error occurred while calling o4635.save.
: org.apache.spark.SparkException: Job aborted.
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:224)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:154)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:225)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 200 in stage 120.0 failed 4 times, most recent failure: Lost task 200.3 in stage 120.0: java.io.FileNotFoundException: Key 'data/csv/part-00000-68ea927d-1451-4a84-acc7-b91e94d0c6a3-c000.csv' does not exist in S3
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
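
# Reading the csv data from the path that is overwritten later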
csv_a = spark \
    .read \
    .format('csv') \
    .option("header", "true") \
    .load('s3n://mybucket/data/csv') \
    .where('some condition')

csv_b = spark \
    .read \
    .format('csv') \
    .option("header", "true") \
    .load('s3n://mybucket/data/model/csv') \
    .alias('csv')

# Reading glue categories data
cc = spark \
    .sql("select * from mydatabase.mytable where month='06'") \
    .alias('cc')

# Joining and Union
output = csv_b \
    .join(cc, (csv_b.key == cc.key), 'inner') \
    .select('csv.key', 'csv.created_ts', 'cc.name', 'csv.text') \
    .drop_duplicates(['key']) \
    .union(csv_a) \
    .orderBy('name') \
    .coalesce(1) \
    .write \
    .format('csv') \
    .option('header', 'true') \
    .mode('overwrite') \
    .save('s3n://mybucket/data/csv')

I need to read the data from the S3 location, then join it and union it with other data, and finally overwrite the initial path, keeping only a single csv file with the clean joined data.

If I read (load) the data from a different S3 path than the one I need to overwrite, it works and the overwrite succeeds fine.

Any idea why this error happens?

When you read data from a folder, modify it, and save it on top of the data you originally read, Spark tries to overwrite the same keys on S3 (the same files on HDFS, and so on) that it is still lazily reading from.
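
Roughly, the failure mode comes down to a read followed by an overwrite of the same prefix. A stripped-down illustration (the path below just mirrors the one from the question, not the exact job above):

# Lazy read: nothing is actually fetched from S3 yet.
df = spark.read.option('header', 'true').csv('s3n://mybucket/data/csv')

# 'overwrite' removes the existing objects under the target path first;
# the write job then tries to read the already-deleted keys and fails
# with the FileNotFoundException shown above.
df.write.mode('overwrite').option('header', 'true').csv('s3n://mybucket/data/csv')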

I found 2 options:

  1. Save the data to a temporary folder, then read it back again (see the sketch after this list)
  2. Use df.persist() to cache it to memory, to disk, or to both
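
A minimal sketch of option 1, assuming the join/union result is kept in a DataFrame variable (called joined here, i.e. the chain from the question stopped before .write) and a made-up temporary prefix data/csv_tmp:

# Option 1 (sketch): write to a temporary prefix first, then re-read from it
# and overwrite the original path. 'joined' and the temporary prefix are
# assumptions made for illustration.
tmp_path = 's3n://mybucket/data/csv_tmp'

joined \
    .coalesce(1) \
    .write \
    .format('csv') \
    .option('header', 'true') \
    .mode('overwrite') \
    .save(tmp_path)

spark \
    .read \
    .format('csv') \
    .option('header', 'true') \
    .load(tmp_path) \
    .coalesce(1) \
    .write \
    .format('csv') \
    .option('header', 'true') \
    .mode('overwrite') \
    .save('s3n://mybucket/data/csv')

Since the final write reads from the temporary prefix rather than from the path being overwritten, the deleted keys are never requested again.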

Solved by adding .persist(StorageLevel.MEMORY_AND_DISK):

from pyspark import StorageLevel

output = csv_b \
    .join(cc, (csv_b.key == cc.key), 'inner') \
    .select('csv.key', 'csv.created_ts', 'cc.name', 'csv.text') \
    .drop_duplicates(['key']) \
    .union(csv_a) \
    .orderBy('name') \
    .coalesce(1) \
    .persist(StorageLevel.MEMORY_AND_DISK) \
    .write \
    .format('csv') \
    .option('header', 'true') \
    .mode('overwrite') \
    .save('s3n://mybucket/data/csv')
