pyspark on EMR, should spark.executor.pyspark.memory and executor.memory be set?

I'm running a repartition-heavy job (heavy because of the data volume, not because of what Spark itself is doing), and I keep running into all sorts of memory errors.

I'm fairly new to this; after some research I finally found a setup that makes execution very fast, but if the table I'm repartitioning is too large, I hit yet another memory error.

With 9 r3.8xlarge instances, this is how I currently configure it on EMR:

--executor-cores 11 --executor-memory 180G

My question is whether I should also set --conf spark.executor.pyspark.memory, and if so, to what value? Should it be the same as the executor memory?

I can't say for sure, but I have a feeling that when I set both to the same value, it crashes with a Java heap error (so I assume it ends up requesting too much RAM).
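
For context, spark.executor.pyspark.memory (available since Spark 2.4) is a separate cap for the Python worker processes, and on YARN its value is added to the executor's container request on top of spark.executor.memory and the memory overhead. A minimal, hedged sketch of how the three relate (the values below are illustrative, not a tuned recommendation for r3.8xlarge):

    from pyspark import SparkConf

    conf = SparkConf()
    # JVM heap per executor
    conf.set('spark.executor.memory', '150g')
    # off-heap overhead reserved for the YARN container (defaults to 10% of executor memory)
    conf.set('spark.executor.memoryOverhead', '15g')
    # separate limit for the Python workers; on YARN this is added to the container request,
    # so 150g + 15g + 20g must still fit within the node's YARN memory allocation
    conf.set('spark.executor.pyspark.memory', '20g')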

As asked in the comments, the latest error I got from EMR is:

diagnostics: Application application_1564657600123_0004 failed 2 times due to AM Container for appattempt_1564657600123_0004_000002 exited with  exitCode: -104
Failing this attempt.Diagnostics: Container [pid=80943,containerID=container_1564657600123_0004_02_000001] is running beyond physical memory limits. Current usage: 1.4 GB of 1.4 GB physical memory used; 5.1 GB of 6.9 GB virtual memory used. Killing container.
Dump of the process-tree for container_1564657600123_0004_02_000001 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 81090 81021 80943 80943 (python) 331 403 1709522944 15143 python emr_interim_aad_ds_conversions_4.py 
    |- 81021 80943 80943 80943 (java) 313384 5420 3625050112 345744 /usr/lib/jvm/java-openjdk/bin/java -server -Xmx1024m -Djava.io.tmpdir=/mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/tmp -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError=kill -9 %p -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class org.apache.spark.deploy.PythonRunner --primary-py-file emr_interim_aad_ds_conversions_4.py --properties-file /mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/__spark_conf__/__spark_conf__.properties 
    |- 80943 80941 80943 80943 (bash) 1 1 115879936 668 /bin/bash -c LD_LIBRARY_PATH="/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:::/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native::/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native" /usr/lib/jvm/java-openjdk/bin/java -server -Xmx1024m -Djava.io.tmpdir=/mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/tmp '-XX:+UseConcMarkSweepGC' '-XX:CMSInitiatingOccupancyFraction=70' '-XX:MaxHeapFreeRatio=70' '-XX:+CMSClassUnloadingEnabled' '-XX:OnOutOfMemoryError=kill -9 %p' -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class 'org.apache.spark.deploy.PythonRunner' --primary-py-file emr_interim_aad_ds_conversions_4.py --properties-file /mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/__spark_conf__/__spark_conf__.properties 1> /var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001/stdout 2> /var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001/stderr 

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
For more detailed output, check the application tracking page: http://ip-172-31-35-191.eu-west-1.compute.internal:8088/cluster/app/application_1564657600123_0004 Then click on links to logs of each attempt.
. Failing the application.
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1564757608574
     final status: FAILED
     tracking URL: http://ip-172-31-35-191.eu-west-1.compute.internal:8088/cluster/app/application_1564657600123_0004
     user: hadoop
19/08/03 09:44:57 ERROR Client: Application diagnostics message: Application application_1564657600123_0004 failed 2 times due to AM Container for appattempt_1564657600123_0004_000002 exited with  exitCode: -104
Failing this attempt.Diagnostics: Container [pid=80943,containerID=container_1564657600123_0004_02_000001] is running beyond physical memory limits. Current usage: 1.4 GB of 1.4 GB physical memory used; 5.1 GB of 6.9 GB virtual memory used. Killing container.
Dump of the process-tree for container_1564657600123_0004_02_000001 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 81090 81021 80943 80943 (python) 331 403 1709522944 15143 python emr_interim_aad_ds_conversions_4.py 
    |- 81021 80943 80943 80943 (java) 313384 5420 3625050112 345744 /usr/lib/jvm/java-openjdk/bin/java -server -Xmx1024m -Djava.io.tmpdir=/mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/tmp -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError=kill -9 %p -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class org.apache.spark.deploy.PythonRunner --primary-py-file emr_interim_aad_ds_conversions_4.py --properties-file /mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/__spark_conf__/__spark_conf__.properties 
    |- 80943 80941 80943 80943 (bash) 1 1 115879936 668 /bin/bash -c LD_LIBRARY_PATH="/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:::/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native::/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native" /usr/lib/jvm/java-openjdk/bin/java -server -Xmx1024m -Djava.io.tmpdir=/mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/tmp '-XX:+UseConcMarkSweepGC' '-XX:CMSInitiatingOccupancyFraction=70' '-XX:MaxHeapFreeRatio=70' '-XX:+CMSClassUnloadingEnabled' '-XX:OnOutOfMemoryError=kill -9 %p' -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class 'org.apache.spark.deploy.PythonRunner' --primary-py-file emr_interim_aad_ds_conversions_4.py --properties-file /mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/__spark_conf__/__spark_conf__.properties 1> /var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001/stdout 2> /var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001/stderr 

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
For more detailed output, check the application tracking page: http://ip-172-31-35-191.eu-west-1.compute.internal:8088/cluster/app/application_1564657600123_0004 Then click on links to logs of each attempt.
. Failing the application.
Exception in thread "main" org.apache.spark.SparkException: Application application_1564657600123_0004 finished with failed status
    at org.apache.spark.deploy.yarn.Client.run(Client.scala:1148)
    at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1525)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
19/08/03 09:44:57 INFO ShutdownHookManager: Shutdown hook called
19/08/03 09:44:57 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-245f0132-a6e5-4a6d-874f-a71942b1636f
19/08/03 09:44:57 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-db12c471-c7d5-4c86-8cda-bf3246ffb860
Command exiting with ret '1'

Try increasing the memory. Below is an example of the configuration in a pyspark script; you can tune your memory like this.

    from pyspark import SparkConf

    conf = SparkConf()
    conf.set('spark.dynamicAllocation.enabled', 'false')
    conf.set('spark.yarn.am.memory', '4g')  # As the log shows, you need to increase your AM memory.
    conf.set('spark.yarn.am.cores', '2')
    conf.set('spark.executor.memoryOverhead', '1200')  # Off-heap memory (in megabytes) allocated per executor; used by the container itself.
    conf.set('spark.executor.memory', '2500m')  # memory * instances should be less than the node's total memory
    conf.set('spark.executor.cores', '4')  # --executor-cores
    conf.set('spark.executor.instances', '8')  # --num-executors
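
As a usage sketch (not part of the original answer), the conf above would be applied when the session is created at the top of the script:

    from pyspark.sql import SparkSession

    # Build the session from the SparkConf defined above; executor settings must be
    # set before the session is created for them to take effect on YARN.
    spark = SparkSession.builder.config(conf=conf).getOrCreate()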

By the way, repartition is a heavy operation. If you only need to reduce the number of partitions, you can use coalesce, which avoids the shuffle.
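
For example, a small sketch of the difference, assuming df is an existing DataFrame:

    # repartition(n) performs a full shuffle and can increase or decrease the partition count
    df_repartitioned = df.repartition(200)

    # coalesce(n) only merges existing partitions, so it avoids the shuffle,
    # but it can only reduce the number of partitions
    df_coalesced = df.coalesce(50)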
