The value of the "spark.yarn.executor.memoryOverhead" setting?

In a Spark job running on YARN, should the value of spark.yarn.executor.memoryOverhead be the memory actually allocated to the application, or is it just a maximum value?
It is just the max value. The goal is to compute the overhead as a percentage of the real executor memory, as used by RDDs and DataFrames.
--executor-memory / spark.executor.memory controls the executor heap size, but JVMs can also use some memory off heap, for example for interned Strings and direct byte buffers.
The value of the spark.yarn.executor.memoryOverhead property is added to the executor memory to determine the full memory request to YARN for each executor. It defaults to max(executorMemory * 0.10, 384) (in MB).
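As a sanity check, the default formula above can be sketched in a few lines of Python (the function name and MB units are illustrative, not part of any Spark API):

```python
def yarn_request_mb(executor_memory_mb, overhead_mb=None):
    """Total memory (MB) YARN reserves for one executor container.

    When spark.yarn.executor.memoryOverhead is not set, Spark defaults it
    to max(executorMemory * 0.10, 384), per the description above.
    """
    if overhead_mb is None:
        overhead_mb = max(int(executor_memory_mb * 0.10), 384)
    return executor_memory_mb + overhead_mb

print(yarn_request_mb(2048))  # small heap: 384 MB floor applies -> 2048 + 384 = 2432
print(yarn_request_mb(8192))  # large heap: 10% applies -> 8192 + 819 = 9011
```

Note how the 384 MB floor dominates for small executors, while the 10% term takes over once the heap exceeds roughly 3.8 GB.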
The executors will use a memory allocation based on the property spark.executor.memory, plus an overhead defined by spark.yarn.executor.memoryOverhead.
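For example, a spark-submit invocation that sets both properties explicitly might look like this (the application jar and the specific values are illustrative):

```shell
# Each executor asks YARN for 4g heap + 512m overhead = 4.5g per container.
spark-submit \
  --master yarn \
  --executor-memory 4g \
  --conf spark.yarn.executor.memoryOverhead=512 \
  your_app.jar
```

If YARN kills executors with "running beyond physical memory limits" errors, raising the overhead (rather than the heap) is usually the first thing to try.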