
Spark keeps submitting tasks to dead executor

I am working with Apache Spark and I am facing a very strange issue. One of the executors failed due to an OOM, and its shutdown hook cleared all storage (memory and disk), but apparently the driver kept submitting the failed tasks to the same executor because of PROCESS_LOCAL task locality.

Since the storage on that machine had been cleared, all the retried tasks also failed, causing the whole stage to fail (after 4 retries).

What I don't understand is how the driver could be unaware that the executor was shutting down and unable to execute any tasks.

Configuration:

  • Heartbeat interval: 60s

  • Network timeout: 600s
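The two settings above map to the following Spark properties; this is a sketch of how they might appear in `spark-defaults.conf`, with the values taken from the question:

```properties
# Interval at which each executor sends heartbeats to the driver
spark.executor.heartbeatInterval  60s
# Default timeout for all network interactions (also the executor-liveness timeout)
spark.network.timeout             600s
```

Note that with a 600s network timeout, the driver can take up to ten minutes to declare an executor lost after heartbeats stop, which is far longer than the ~15-second shutdown window shown in the logs below.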

Logs confirming that the executor was still accepting tasks after shutdown:

20/09/29 20:26:32 ERROR [Executor task launch worker for task 138513] Executor: Exception in task 6391.0 in stage 17.0 (TID 138513)
java.lang.OutOfMemoryError: Java heap space
20/09/29 20:26:32 ERROR [Executor task launch worker for task 138513] SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker for task 138513,5,main]
java.lang.OutOfMemoryError: Java heap space
20/09/29 20:26:32 INFO [pool-8-thread-1] DiskBlockManager: Shutdown hook called
20/09/29 20:26:35 ERROR [Executor task launch worker for task 138295] Executor: Exception in task 6239.0 in stage 17.0 (TID 138295)
java.io.FileNotFoundException: /storage/1/spark/spark-ba168da6-dc11-4e15-bd95-1e58198c81e7/executor-8dea198c-741a-4733-8fbb-df57241acdd5/blockmgr-1fc6b30a-c24e-4bb2-a133-5e411cef810f/35/temp_shuffle_b5df90ac-78de-48e3-9c2d-891f8b2ce1fa (No such file or directory)
20/09/29 20:26:36 ERROR [Executor task launch worker for task 139484] Executor: Exception in task 6587.0 in stage 17.0 (TID 139484)
org.apache.spark.SparkException: Block rdd_3861_6587 was not found even though it's read-locked
20/09/29 20:26:42 WARN [Thread-2] ShutdownHookManager: ShutdownHook '$anon$2' timeout, java.util.concurrent.TimeoutException
java.util.concurrent.TimeoutException: null
    at java.util.concurrent.FutureTask.get(FutureTask.java:205) ~[?:1.8.0_172]
20/09/29 20:26:44 ERROR [Executor task launch worker for task 140256] Executor: Exception in task 6576.3 in stage 17.0 (TID 140256)
java.io.FileNotFoundException: /storage/1/spark/spark-ba168da6-dc11-4e15-bd95-1e58198c81e7/executor-8dea198c-741a-4733-8fbb-df57241acdd5/blockmgr-1fc6b30a-c24e-4bb2-a133-5e411cef810f/30/rdd_3861_6576 (No such file or directory)
20/09/29 20:26:44 INFO [dispatcher-event-loop-0] Executor: Executor is trying to kill task 6866.1 in stage 17.0 (TID 140329), reason: stage cancelled
20/09/29 20:26:47 INFO [pool-8-thread-1] ShutdownHookManager: Shutdown hook called
20/09/29 20:26:47 DEBUG [pool-8-thread-1] Client: stopping client from cache: org.apache.hadoop.ipc.Client@3117bde5
20/09/29 20:26:47 DEBUG [pool-8-thread-1] Client: stopping client from cache: org.apache.hadoop.ipc.Client@3117bde5
20/09/29 20:26:47 DEBUG [pool-8-thread-1] Client: removing client from cache: org.apache.hadoop.ipc.Client@3117bde5
20/09/29 20:26:47 DEBUG [pool-8-thread-1] Client: stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@3117bde5
20/09/29 20:26:47 DEBUG [pool-8-thread-1] Client: Stopping client
20/09/29 20:26:47 DEBUG [Thread-2] ShutdownHookManager: ShutdownHookManger complete shutdown
20/09/29 20:26:55 INFO [dispatcher-event-loop-14] CoarseGrainedExecutorBackend: Got assigned task 141510
20/09/29 20:26:55 INFO [Executor task launch worker for task 141510] Executor: Running task 545.1 in stage 26.0 (TID 141510)

(I have trimmed the stack traces, since they were just Spark RDD shuffle-read methods.)

If we check the timestamps, the shutdown started at 20/09/29 20:26:32 and finished at 20/09/29 20:26:47. During this window, the driver sent all the retried tasks to the same executor, and they all failed, causing the stage to be cancelled.

Can someone help me understand this behavior? Please let me know if anything else is needed.

Spark has a configuration, spark.task.maxFailures, which defaults to 4, so Spark retries a task when it fails. The TaskRunner updates the task status to the driver, and the driver forwards that status to TaskSchedulerImpl. In your case, only the executor hit an OOM and its DiskBlockManager was shut down, but the driver was still alive. I think the executor process was also still alive, so the tasks were retried through the same TaskSetManager. Once a task's failure count reached 4, the stage was cancelled and the executor was killed.
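For reference, the retry limit the answer describes can be set explicitly. The sketch below shows the default from the answer, plus `spark.blacklist.enabled`, a Spark 2.x scheduler option (an addition on my part, not mentioned in the answer) that tells the driver to stop placing retries on an executor after repeated task failures there, which is the scheduling behavior the question runs into:

```properties
# Number of task failures before the stage is aborted (default 4, per the answer)
spark.task.maxFailures     4
# Optionally exclude executors/nodes with repeated task failures from scheduling
spark.blacklist.enabled    true
```

Whether blacklisting would have helped here depends on the Spark version and on the executor failing tasks fast enough to trip the blacklist thresholds before the stage aborts.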
