
Spark Worker and Executors Cores

I have a Spark cluster running in YARN mode on top of HDFS. I launched one worker with 2 cores and 2g of memory, then submitted a job with a dynamic configuration of 1 executor with 3 cores. Still, my job is able to run. Can somebody explain the difference between the number of cores the worker is launched with and the number requested for the executors? My understanding was that, since executors run inside workers, they cannot acquire more resources than are available to the worker.

Check the parameter yarn.nodemanager.resource.cpu-vcores in yarn-site.xml.

yarn.nodemanager.resource.cpu-vcores controls the maximum total number of vcores that containers can use on each node.
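
For reference, a minimal yarn-site.xml entry for this setting could look like the sketch below; the value of 8 is only an illustrative example.

    <property>
      <!-- Upper bound on vcores that YARN may allocate to containers on this node -->
      <name>yarn.nodemanager.resource.cpu-vcores</name>
      <value>8</value>
    </property>

Note also that whether a request for more vcores than the node advertises is actually rejected depends on the scheduler's resource calculator; with the DefaultResourceCalculator, YARN allocates containers by memory only, which is one reason an executor asking for 3 cores can still be placed on a 2-vcore node.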

-> Spark launches a number of executors on the worker nodes.
-> The number of cores and the executor-memory parameter for those executors are specified when the application is submitted to the Spark cluster.
-> With spark-submit you cannot specify the number of cores for a worker node, only for the executors (see the sketch below).
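
As an illustration of the second point, a submission like the one described in the question (1 executor with 3 cores) might look like the following; the main class and jar names are placeholders, not taken from the question.

    # Per-executor resources are requested at submit time; there is no
    # per-worker-node core setting here. Class and jar names are hypothetical.
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --num-executors 1 \
      --executor-cores 3 \
      --executor-memory 2g \
      --class com.example.MyApp \
      my-app.jar

Equivalently, spark.executor.instances, spark.executor.cores and spark.executor.memory can be set through --conf or in spark-defaults.conf.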
