
The value of the "spark.yarn.executor.memoryOverhead" setting?

In a Spark job running on YARN, should the value of spark.yarn.executor.memoryOverhead be allocated to the application, or is it just a maximum value?

spark.yarn.executor.memoryOverhead

It is just the max value. The goal is to calculate the overhead as a percentage of the real executor memory, as used by RDDs and DataFrames.

--executor-memory/spark.executor.memory

controls the executor heap size, but JVMs can also use some memory off heap, for example for interned Strings and direct byte buffers.

The value of the spark.yarn.executor.memoryOverhead property is added to the executor memory to determine the full memory request to YARN for each executor. It defaults to max(executorMemory * 0.10, 384 MB).
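The default rule above can be sketched as a small helper. This is a minimal illustration of the max(executorMemory * 0.10, 384) formula, not Spark's actual internal code; the function name and the 10% / 384 MB constants mirror the defaults described in the answer.

```python
def yarn_executor_request_mb(executor_memory_mb, overhead_fraction=0.10, overhead_min_mb=384):
    """Total memory YARN reserves per executor: heap plus overhead.

    Mirrors the default rule max(executorMemory * 0.10, 384 MB) described above.
    """
    overhead_mb = max(int(executor_memory_mb * overhead_fraction), overhead_min_mb)
    return executor_memory_mb + overhead_mb

# 2 GB heap: 10% is 204 MB, below the 384 MB floor, so 384 MB of overhead is added
print(yarn_executor_request_mb(2048))   # 2432

# 8 GB heap: 10% is 819 MB, above the floor, so 819 MB of overhead is added
print(yarn_executor_request_mb(8192))   # 9011
```

Note that the floor matters mostly for small executors; for large heaps the 10% term dominates.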

The executors will use a memory allocation based on the spark.executor.memory property, plus an overhead defined by spark.yarn.executor.memoryOverhead.
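To override the default overhead rather than rely on the 10% rule, the property can be set explicitly at submit time. This is a hedged, illustrative command (the 8g heap, 1024 MB overhead, and `my_job.py` script name are made-up values, not from the original post); note that in Spark 3.x the property was renamed to spark.executor.memoryOverhead.

```shell
# Request an 8 GB executor heap plus an explicit 1 GB (1024 MB) overhead,
# so YARN reserves roughly 9 GB per executor container.
spark-submit \
  --master yarn \
  --executor-memory 8g \
  --conf spark.yarn.executor.memoryOverhead=1024 \
  my_job.py
```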

