Spark - Add Worker from Local Machine (standalone spark cluster manager)?
When running Spark 1.4.0 on a single machine, I can add a worker with the command "./bin/spark-class org.apache.spark.deploy.worker.Worker myhostname:7077". The official documentation points out another way: add "myhostname:7077" to the "conf/slaves" file and then run "sbin/start-all.sh", which starts the master and all workers listed in the conf/slaves file. However, the latter method doesn't work for me (it fails with a timeout error). Can anyone help me with this?
Here is my conf/slaves file (assume the master URL is myhostname:700):

myhostname:700
The conf/slaves file should just be a list of hostnames; you don't need to include the port number Spark is running on (I think if you do, it will try to SSH to that port, which is probably the source of the timeout).
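For example, a minimal sketch (here "myhostname" is just a placeholder for whatever hostname your machine actually resolves to):

# conf/slaves -- one hostname per line, no port number
myhostname

# then start the master and all workers listed in conf/slaves
sbin/start-all.sh

Note that the start scripts SSH into every host listed in conf/slaves (even when that host is the local machine), so you also need passwordless SSH to those hosts for start-all.sh to work.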