
Where to set "spark.yarn.executor.memoryOverhead"

I am getting the following error while running my spark-scala program.

YarnSchedulerBackend$YarnSchedulerEndpoint: Container killed by YARN for exceeding memory limits. 2.6 GB of 2.5 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.

I have set spark.yarn.executor.memoryOverhead in the program while creating the SparkSession.

My question is: is it OK to set "spark.yarn.executor.memoryOverhead" while creating the SparkSession, or should it be passed at runtime with spark-submit?

You have to set spark.yarn.executor.memoryOverhead at the time of SparkSession creation. This parameter is the amount of off-heap memory (in megabytes) to be allocated per executor. It accounts for things like VM overheads, interned strings, and other native overheads, and it tends to grow with the executor size (typically 6-10% of executor memory).
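For example, in a spark-scala program it can be set on the builder before the session is created. A minimal sketch follows; the app name and the 1024 MB value are illustrative placeholders, not values taken from the question:

```scala
import org.apache.spark.sql.SparkSession

// Configure the per-executor off-heap overhead (in MB) before the
// session is created, since executors are requested at startup.
val spark = SparkSession.builder()
  .appName("MemoryOverheadExample") // placeholder app name
  .config("spark.yarn.executor.memoryOverhead", "1024") // illustrative value
  .getOrCreate()
```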

Now, this allocation can only be done at the time the executors are allocated, not at runtime; once YARN has granted the containers, the setting cannot be changed for the running application.
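For reference, the same property can also be passed with spark-submit, which likewise applies it before any executor is allocated. A sketch of that form, where the class name, jar name, and memory value are placeholders:

```bash
# Illustrative values; com.example.MyApp and my-app.jar are placeholders.
spark-submit \
  --master yarn \
  --conf spark.yarn.executor.memoryOverhead=1024 \
  --class com.example.MyApp \
  my-app.jar
```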
