
How are Spark Executors launched if Spark (on YARN) is not installed on the worker nodes?

I have a question regarding Apache Spark running on YARN in cluster mode. According to this thread, Spark itself does not have to be installed on every (worker) node in the cluster. My problem is with the Spark Executors: in general, YARN, or rather the Resource Manager, decides about resource allocation, so Spark Executors could be launched on any (worker) node in the cluster. But then, how can Spark Executors be launched by YARN if Spark is not installed on any (worker) node?

At a high level, when a Spark application is launched on YARN (a minimal submission command is sketched after the list below):

  1. An Application Master (Spark-specific) is created in one of the YARN containers.
  2. The other YARN containers are used for the Spark workers (Executors).
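
For illustration, a minimal cluster-mode submission might look like the sketch below; the main class com.example.MyApp and the jar my-app.jar are hypothetical placeholders:

    # cluster-mode submission sketch; class and jar names are placeholders
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --class com.example.MyApp \
      my-app.jar

The machine running spark-submit needs a Spark distribution locally, but the worker nodes do not: YARN allocates a container for the Application Master and further containers for the Executors, and the required jars are shipped to those containers (see below).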

The Spark driver passes serialized actions (code) to the Executors to process the data.
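
For example, in the small Scala sketch below (the app name and numbers are arbitrary), the function passed to map is serialized on the driver and executed inside the Executor containers:

    import org.apache.spark.sql.SparkSession

    object ClosureExample {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("ClosureExample").getOrCreate()
        val sc = spark.sparkContext

        // The lambda below is serialized by the driver and shipped to the
        // Executors, which run it against their partitions of the data.
        val squares = sc.parallelize(1 to 10).map(x => x * x)

        // collect() brings the Executors' results back to the driver.
        squares.collect().foreach(println)

        spark.stop()
      }
    }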

The spark-assembly jar provides the Spark-related classes needed to run Spark jobs on a YARN cluster; the application brings its own functional (application-specific) jars.


Edit (2017-01-04):

Spark 2.0 no longer requires a fat assembly jar for production deployment (source).
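
For example (a sketch assuming the Spark runtime jars have been uploaded to a hypothetical HDFS path), Spark 2.x can instead be pointed at pre-staged jars via spark-defaults.conf, so nothing has to be installed on the worker nodes themselves:

    # spark-defaults.conf (the HDFS paths below are hypothetical)
    spark.yarn.jars      hdfs:///apps/spark/jars/*.jar
    # or, alternatively, a single archive containing those jars:
    # spark.yarn.archive  hdfs:///apps/spark/spark-libs.zip

If neither setting is given, Spark uploads the jars from the local installation on the submitting machine for each application.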
