
Many Dead Executors on EMR

I am trying to execute my Spark Scala application on an AWS EMR cluster by creating a step of type Spark Application.

My cluster contains 4 m3.xlarge instances.

I launch my application with the following command:

spark-submit --deploy-mode cluster --class Main s3://mybucket/myjar_2.11-0.1.jar s3n://oc-mybucket/folder arg1 arg2

My application takes 3 arguments, the first of which is a folder.

Unfortunately, after launching the application I see that only one executor (plus the master) is active, while the other 3 executors are dead, so all tasks run only on the first one. See the image:

[screenshot: Spark UI executors page showing 1 active and 3 dead executors]

I have tried many things to activate those executors, but without any result ("spark.default.parallelism", "spark.executor.instances" and "spark.executor.cores"). What should I do so that all executors are active and processing data?

Also, when looking at Ganglia, my CPU usage is always below 35%. Is there a way to get the CPU working above 75%?

Thank you.

UPDATE

Here is the stderr content of one of the dead executors:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/mnt/yarn/usercache/hadoop/filecache/14/__spark_libs__3671437061469038073.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
20/08/15 23:28:56 INFO CoarseGrainedExecutorBackend: Started daemon with process name: 14765@ip-172-31-39-255
20/08/15 23:28:56 INFO SignalUtils: Registered signal handler for TERM
20/08/15 23:28:56 INFO SignalUtils: Registered signal handler for HUP
20/08/15 23:28:56 INFO SignalUtils: Registered signal handler for INT
20/08/15 23:28:57 INFO SecurityManager: Changing view acls to: yarn,hadoop
20/08/15 23:28:57 INFO SecurityManager: Changing modify acls to: yarn,hadoop
20/08/15 23:28:57 INFO SecurityManager: Changing view acls groups to: 
20/08/15 23:28:57 INFO SecurityManager: Changing modify acls groups to: 
20/08/15 23:28:57 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(yarn, hadoop); groups with view permissions: Set(); users  with modify permissions: Set(yarn, hadoop); groups with modify permissions: Set()
20/08/15 23:28:58 INFO TransportClientFactory: Successfully created connection to ip-172-31-36-83.eu-west-1.compute.internal/172.31.36.83:37115 after 186 ms (0 ms spent in bootstraps)
20/08/15 23:28:58 INFO SecurityManager: Changing view acls to: yarn,hadoop
20/08/15 23:28:58 INFO SecurityManager: Changing modify acls to: yarn,hadoop
20/08/15 23:28:58 INFO SecurityManager: Changing view acls groups to: 
20/08/15 23:28:58 INFO SecurityManager: Changing modify acls groups to: 
20/08/15 23:28:58 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(yarn, hadoop); groups with view permissions: Set(); users  with modify permissions: Set(yarn, hadoop); groups with modify permissions: Set()
20/08/15 23:28:58 INFO TransportClientFactory: Successfully created connection to ip-172-31-36-83.eu-west-1.compute.internal/172.31.36.83:37115 after 2 ms (0 ms spent in bootstraps)
20/08/15 23:28:58 INFO DiskBlockManager: Created local directory at /mnt1/yarn/usercache/hadoop/appcache/application_1597532473783_0002/blockmgr-d0d258ba-4345-45d1-9279-f6a97b63f81c
20/08/15 23:28:58 INFO DiskBlockManager: Created local directory at /mnt/yarn/usercache/hadoop/appcache/application_1597532473783_0002/blockmgr-e7ae1e29-85fa-4df9-acf1-f9923f0664bc
20/08/15 23:28:58 INFO MemoryStore: MemoryStore started with capacity 2.6 GB
20/08/15 23:28:59 INFO CoarseGrainedExecutorBackend: Connecting to driver: spark://CoarseGrainedScheduler@ip-172-31-36-83.eu-west-1.compute.internal:37115
20/08/15 23:28:59 INFO CoarseGrainedExecutorBackend: Successfully registered with driver
20/08/15 23:28:59 INFO Executor: Starting executor ID 3 on host ip-172-31-39-255.eu-west-1.compute.internal
20/08/15 23:28:59 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 40501.
20/08/15 23:28:59 INFO NettyBlockTransferService: Server created on ip-172-31-39-255.eu-west-1.compute.internal:40501
20/08/15 23:28:59 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/08/15 23:29:00 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(3, ip-172-31-39-255.eu-west-1.compute.internal, 40501, None)
20/08/15 23:29:00 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(3, ip-172-31-39-255.eu-west-1.compute.internal, 40501, None)
20/08/15 23:29:00 INFO BlockManager: external shuffle service port = 7337
20/08/15 23:29:00 INFO BlockManager: Registering executor with local external shuffle service.
20/08/15 23:29:00 INFO TransportClientFactory: Successfully created connection to ip-172-31-39-255.eu-west-1.compute.internal/172.31.39.255:7337 after 20 ms (0 ms spent in bootstraps)
20/08/15 23:29:00 INFO BlockManager: Initialized BlockManager: BlockManagerId(3, ip-172-31-39-255.eu-west-1.compute.internal, 40501, None)
20/08/15 23:29:03 INFO CoarseGrainedExecutorBackend: eagerFSInit: Eagerly initialized FileSystem at s3://does/not/exist in 3363 ms
20/08/15 23:30:02 ERROR CoarseGrainedExecutorBackend: RECEIVED SIGNAL TERM
20/08/15 23:30:02 INFO DiskBlockManager: Shutdown hook called
20/08/15 23:30:02 INFO ShutdownHookManager: Shutdown hook called

Could this problem be related to memory?

spark-submit does not use all the executors by default; you can specify the number of executors with --num-executors, along with --executor-cores and --executor-memory.

For example, to increase the number of executors (the default is 2):

spark-submit --num-executors N   # where N is the desired number of executors, e.g. 5, 10, 50
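Applied to the submit command from the question, a fuller invocation might look like the following; the executor count, cores and memory here are illustrative assumptions for a 4 x m3.xlarge cluster, not values from the question:

```shell
spark-submit \
  --deploy-mode cluster \
  --class Main \
  --num-executors 4 \
  --executor-cores 3 \
  --executor-memory 10G \
  s3://mybucket/myjar_2.11-0.1.jar s3n://oc-mybucket/folder arg1 arg2
```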

See the examples in the documentation here.

If specifying it with spark-submit does not help, or it gets overridden, you can set spark.executor.instances in the conf/spark-defaults.conf file or similar, so that you do not have to specify it explicitly on the command line.
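A minimal sketch of such a spark-defaults.conf entry; the values are example numbers, not a recommendation for this cluster:

```properties
# conf/spark-defaults.conf
spark.executor.instances   4
spark.executor.cores       3
spark.executor.memory      10g
```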

For CPU utilization, you should look at --executor-cores and change it in spark-submit or in the conf. Increasing the number of CPU cores per executor should increase utilization.

UPDATE

As @Lamanus pointed out, and as I double checked, EMR versions greater than 4.4 set spark.dynamicAllocation.enabled to true. I suggest you double check the partitioning of your data, because with dynamic allocation enabled, the number of executor instances depends on the number of partitions, which varies across the stages of the DAG execution. Additionally, with dynamic allocation, you can try spark.dynamicAllocation.initialExecutors and spark.dynamicAllocation.maxExecutors to control the executors.
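Those dynamic-allocation bounds can be passed as --conf flags on the submit command; the numbers below are illustrative only:

```shell
spark-submit \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.initialExecutors=4 \
  --conf spark.dynamicAllocation.maxExecutors=12 \
  --deploy-mode cluster --class Main s3://mybucket/myjar_2.11-0.1.jar s3n://oc-mybucket/folder arg1 arg2
```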

This may be a bit late, but I found this AWS Big Data blog post insightful for making sure most of my cluster was utilized and for achieving as much parallelism as possible:

https://aws.amazon.com/blogs/big-data/best-practices-for-successfully-managing-memory-for-apache-spark-applications-on-amazon-emr/

More specifically:

Number of executors per instance = (total number of virtual cores per instance - 1) / spark.executor.cores

Total executor memory = total RAM per instance / number of executors per instance
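As a sanity check, the two formulas above can be evaluated for the question's instance type. This shell sketch assumes m3.xlarge specs (4 vCPUs, 15 GB RAM per instance) and 3 cores per executor (leaving 1 core for YARN/OS daemons); adjust the inputs for your own hardware:

```shell
# Assumed m3.xlarge specs and a hypothetical spark.executor.cores setting
VCORES_PER_INSTANCE=4
RAM_PER_INSTANCE_GB=15
EXECUTOR_CORES=3   # spark.executor.cores, leaving 1 vCore for daemons

# Number of executors per instance = (vCores per instance - 1) / spark.executor.cores
EXECUTORS_PER_INSTANCE=$(( (VCORES_PER_INSTANCE - 1) / EXECUTOR_CORES ))

# Total executor memory = total RAM per instance / executors per instance
EXECUTOR_MEMORY_GB=$(( RAM_PER_INSTANCE_GB / EXECUTORS_PER_INSTANCE ))

echo "executors per instance: $EXECUTORS_PER_INSTANCE"
echo "memory per executor:    ${EXECUTOR_MEMORY_GB} GB"
```

In practice you would leave some of that memory headroom for spark.executor.memoryOverhead rather than giving the executor the full share.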

You can then control the number of parallel tasks during a stage with spark.default.parallelism or by repartitioning.
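For example, the parallelism can be set at submit time; the value 48 below is purely illustrative and should normally be sized to a small multiple of the total executor cores:

```shell
spark-submit --conf spark.default.parallelism=48 \
  --deploy-mode cluster --class Main s3://mybucket/myjar_2.11-0.1.jar s3n://oc-mybucket/folder arg1 arg2
```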
