
Spark - Add Worker from Local Machine (standalone spark cluster manager)?

When running Spark 1.4.0 on a single machine, I can add a worker with the command "./bin/spark-class org.apache.spark.deploy.worker.Worker myhostname:7077". The official documentation describes another way: add "myhostname:7077" to the "conf/slaves" file and then run "sbin/start-all.sh", which starts the master and all of the workers listed in conf/slaves. However, the latter method doesn't work for me (it fails with a time-out error). Can anyone help me with this?

Here is my conf/slaves file (assume the master URL is myhostname:700):

myhostname:700
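
For completeness, the manual approach I described looks roughly like this (a minimal sketch of my setup; the spark:// form of the master URL is what the worker expects, and the paths assume a standard Spark 1.4.0 install):

# start the standalone master on this machine
./sbin/start-master.sh
# start a worker and register it with the master at spark://myhostname:7077
./bin/spark-class org.apache.spark.deploy.worker.Worker spark://myhostname:7077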

The conf/slaves file should just be a list of hostnames; you don't need to include the port that Spark is running on (I think if you do, it will try to SSH on that port, which is probably the source of the time-out).
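
Concretely, something like the following should work (a sketch, assuming the whole cluster runs on a single machine named myhostname and a standard Spark layout):

# conf/slaves -- one hostname per line, no port
myhostname

# then restart the standalone cluster so the change takes effect
./sbin/stop-all.sh
./sbin/start-all.sh

Note that sbin/start-all.sh logs into each host listed in conf/slaves over SSH, so passwordless SSH to that host (even in a single-machine setup) needs to be configured.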


 