Does YARN allocate one container for the application master from the number of executors that we pass in our spark-submit command?
Let's assume that I am submitting a Spark application in yarn-client mode. In spark-submit I pass --num-executors as 10. When the client submits this Spark application to the ResourceManager,

does the ResourceManager allocate one executor container for the Application Master process out of the --num-executors (10), with the remaining 9 given to the actual executors?
or

does it allocate one new container for the Application Master and give all 10 containers to executors alone?
--num-executors requests that number of executors from the cluster manager (which may be Hadoop YARN). That's Spark's requirement.
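For reference, a minimal sketch of the submission described in the question (the jar name, class name, and memory/core settings are illustrative placeholders, not from the question; recent Spark versions express yarn-client mode as `--master yarn --deploy-mode client`):

```shell
# Hypothetical submission: jar, class, and resource sizes are placeholders.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --num-executors 10 \
  --executor-memory 2g \
  --executor-cores 2 \
  --class com.example.MyApp \
  my-app.jar
```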
An Application Master (of a YARN application) is purely a YARN concept; every YARN application gets one.

A Spark application running on YARN is itself a YARN application. In that case, the Spark application gets 10 containers for executors plus one extra container for the AM.
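In other words, the AM container comes on top of the requested executors, so the total container count can be sketched as:

```shell
# Total YARN containers = requested executor containers + 1 Application Master container.
NUM_EXECUTORS=10
TOTAL_CONTAINERS=$((NUM_EXECUTORS + 1))
echo "Executor containers: $NUM_EXECUTORS, total YARN containers: $TOTAL_CONTAINERS"
```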