
The value of the “spark.yarn.executor.memoryOverhead” setting?

In a Spark job running on YARN, should the value of spark.yarn.executor.memoryOverhead be allocated per app, or is it just the max value?

spark.yarn.executor.memoryOverhead

is just the max value. The goal is to calculate the overhead as a percentage of the real executor memory, as used by RDDs and DataFrames.
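
A minimal Scala sketch of that calculation, assuming the 10% / 384 MiB defaults discussed below. The constant names echo Spark's YARN client internals, but defaultOverheadMiB is a hypothetical helper for illustration, not Spark API:

// Assumed defaults: 10% of executor memory, floored at 384 MiB.
val MEMORY_OVERHEAD_FACTOR = 0.10
val MEMORY_OVERHEAD_MIN = 384L // MiB

def defaultOverheadMiB(executorMemoryMiB: Long): Long =
  math.max((MEMORY_OVERHEAD_FACTOR * executorMemoryMiB).toLong, MEMORY_OVERHEAD_MIN)

println(defaultOverheadMiB(2048)) // max(204, 384) = 384 -- the minimum wins
println(defaultOverheadMiB(8192)) // max(819, 384) = 819 -- the percentage wins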

--executor-memory/spark.executor.memory

controls the executor heap size, but JVMs can also use some memory off heap, for example for interned Strings and direct byte buffers.
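A direct ByteBuffer, for example, is allocated outside the heap, so it never counts against spark.executor.memory. A self-contained Scala illustration:

import java.nio.ByteBuffer

// This 64 MiB allocation lives off heap: it is invisible to -Xmx
// (spark.executor.memory) and is exactly the kind of usage the
// memoryOverhead setting is meant to cover.
val direct = ByteBuffer.allocateDirect(64 * 1024 * 1024)
println(direct.isDirect) // true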

The value of the spark.yarn.executor.memoryOverhead property is added to the executor memory to determine the full memory request to YARN for each executor. It defaults to max(executorMemory * 0.10, 384), i.e. 10% of the executor memory, with a minimum of 384 MiB.
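As a worked example (a sketch under the default formula above, not exact Spark source), a 4 GiB executor leads to a container request of about 4505 MiB:

val executorMemoryMiB = 4096L                              // --executor-memory 4g
val overheadMiB = math.max((executorMemoryMiB * 0.10).toLong, 384L) // 409
val containerRequestMiB = executorMemoryMiB + overheadMiB  // 4096 + 409 = 4505
println(s"YARN container request: $containerRequestMiB MiB")

YARN may then round the request up according to its own allocation granularity (e.g. yarn.scheduler.minimum-allocation-mb), so the granted container can be somewhat larger than this figure.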

Each executor's total memory allocation is therefore the spark.executor.memory value plus the overhead defined by spark.yarn.executor.memoryOverhead.
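If the default overhead proves too small, it can be raised explicitly. A hedged sketch using SparkConf; note that in Spark 2.3+ this property was renamed spark.executor.memoryOverhead:

import org.apache.spark.SparkConf

// Sketch: request 4 GiB of heap plus an explicit 1 GiB overhead,
// overriding the max(executorMemory * 0.10, 384) default.
val conf = new SparkConf()
  .set("spark.executor.memory", "4g")
  .set("spark.yarn.executor.memoryOverhead", "1024") // value is in MiB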
