How to use Distributed Tensorflow on remote machines?
I am trying to run a distributed TensorFlow script across three machines: my local machine runs the parameter server, and two remote machines run the worker jobs. I am following the example in the TensorFlow documentation, passing the IP address and a unique port number for each worker job, and setting the `protocol` option of `tf.train.Server` to `'grpc'`. However, when I run the script, all three processes start on localhost and none of the jobs run on the remote machines. Am I missing a step?
My (abridged) code:
import tensorflow as tf

# Define flags
tf.app.flags.DEFINE_string("ps_hosts", "localhost:2223",
                           "comma-separated list of hostname:port pairs")
tf.app.flags.DEFINE_string("worker_hosts",
                           "server1.com:2224,server2.com:2225",
                           "comma-separated list of hostname:port pairs")
tf.app.flags.DEFINE_string("job_name", "worker", "One of 'ps', 'worker'")
tf.app.flags.DEFINE_integer("task_index", 0, "Index of task within the job")
FLAGS = tf.app.flags.FLAGS

ps_hosts = FLAGS.ps_hosts.split(",")
worker_hosts = FLAGS.worker_hosts.split(",")
cluster = tf.train.ClusterSpec({"ps": ps_hosts, "worker": worker_hosts})
server = tf.train.Server(cluster, job_name=FLAGS.job_name,
                         task_index=FLAGS.task_index, protocol='grpc')

if FLAGS.job_name == "ps":
    server.join()
elif FLAGS.job_name == "worker":
    # Between-graph replication
    with tf.device(tf.train.replica_device_setter(
            cluster=cluster,
            worker_device="/job:worker/task:{}".format(FLAGS.task_index))):
        # Create model...

        sv = tf.train.Supervisor(is_chief=(FLAGS.task_index == 0),
                                 logdir="./checkpoint",
                                 init_op=init_op,
                                 summary_op=summary,
                                 saver=saver,
                                 global_step=global_step,
                                 save_model_secs=600)
        with sv.managed_session(server.target,
                                config=config_proto) as sess:
            # Train model...
This code leads to two problems:
From worker0:
2018-04-09 23:48:39.749679: I tensorflow/core/distributed_runtime/master.cc:221] CreateSession still waiting for response from worker: /job:worker/replica:0/task:1
From worker1:
2018-04-09 23:49:30.439166: I tensorflow/core/distributed_runtime/master.cc:221] CreateSession still waiting for response from worker: /job:worker/replica:0/task:0
Setting `device_filter` gets rid of the previous problem, but all jobs still start on the local machine rather than on the remote servers. How do I get the two worker jobs to run on the remote servers?
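For reference, the `device_filter` mentioned above is set through the session's `tf.ConfigProto` (this is the `config_proto` passed to `managed_session` in the code); a minimal sketch, assuming TensorFlow 1.x, might look like:

```python
import tensorflow as tf

# Each worker only needs to see the parameter servers and itself, so
# session creation does not block waiting for the other workers.
config_proto = tf.ConfigProto(
    device_filters=["/job:ps",
                    "/job:worker/task:0"])  # task index of this worker
```

The filter list would normally be built from `FLAGS.task_index` rather than hard-coded.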
My understanding is that you have to run this script on every host in the cluster: with the `--job_name=ps` argument on the parameter server, and with `--job_name=worker --task_index=[0,1]` on the workers.
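For example, assuming the script is saved as `trainer.py` (a hypothetical filename) and using the hostnames from the question, the three invocations might look like this, one per machine:

```shell
# On the local machine (parameter server):
python trainer.py --job_name=ps --task_index=0

# On server1.com (first worker):
python trainer.py --job_name=worker --task_index=0

# On server2.com (second worker):
python trainer.py --job_name=worker --task_index=1
```

Each process then binds to its own `hostname:port` entry from the cluster spec; starting all three on one machine is what makes everything run on localhost.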