
Tensorflow Multiprocessing; UnknownError: Could not start gRPC server

I am computing Hessian matrices over a large dataset, and I am trying to run these computations in parallel across multiple CPUs. My setup currently has 1 node with 10 CPUs. I am using Python 2.7.
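For context, the kind of per-row CPU parallelism I am after looks roughly like the standalone sketch below (NOT my actual model code: `quad`, `hessian_row`, and `parallel_hessian` are names invented for illustration, and numpy is assumed to be available). It computes a central-finite-difference Hessian with one pool task per row:

```python
# Standalone sketch of per-row Hessian parallelism (illustrative names only):
# each row of a central-difference Hessian is computed by a pool worker.
import functools
import multiprocessing

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

def quad(x):
    # toy objective f(x) = 0.5 * x^T A x, whose exact Hessian is A
    return 0.5 * np.dot(x, np.dot(A, x))

def hessian_row(f, x, i, h=1e-3):
    # row i of the Hessian of f at x, via 4-point central differences
    n = x.size
    row = np.zeros(n)
    ei = np.zeros(n)
    ei[i] = h
    for j in range(n):
        ej = np.zeros(n)
        ej[j] = h
        row[j] = (f(x + ei + ej) - f(x + ei - ej)
                  - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
    return row

def parallel_hessian(f, x, n_procs=2):
    # fan one task per Hessian row out over a CPU pool
    pool = multiprocessing.Pool(n_procs)
    try:
        rows = pool.map(functools.partial(hessian_row, f, x), range(x.size))
    finally:
        pool.close()
        pool.join()
    return np.vstack(rows)

if __name__ == '__main__':
    print(parallel_hessian(quad, np.array([0.5, -1.0])))
```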

I wrote a small abstraction of my code to better understand distributed TensorFlow. Below is the error:

2017-07-23 16:16:17.281414: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:316] Started server with target: grpc://localhost:2225
Process Process-3:
Traceback (most recent call last):
  File "/home/skay/anaconda2/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/skay/anaconda2/lib/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "/home/skay/.PyCharmCE2017.1/config/scratches/scratch_6.py", line 32, in cifar10
    serv = tf.train.Server(cluster, job_name= params.job_name,task_index=params.task_index)
  File "/home/skay/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/server_lib.py", line 145, in __init__
    self._server_def.SerializeToString(), status)
  File "/home/skay/anaconda2/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/home/skay/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status)) UnknownError: Could not start gRPC server

I get this error every time I run the code. However, it then goes on to produce output for one of the two processes I set up, as follows:

2017-07-23 16:27:48.605617: I tensorflow/core/distributed_runtime/master_session.cc:999] Start master session fe9fd6a338e2c9a7 with config: 

2017-07-23 16:27:48.607126: I tensorflow/core/distributed_runtime/master_session.cc:999] Start master session 3560417f98b00dea with config: 

[  1.   2.   3.   4.   5.   6.   7.   8.   9.  10.]
Process-3
[  1.   2.   3.   4.   5.   6.   7.   8.   9.  10.]
Process-3
[  1.   2.   3.   4.   5.   6.   7.   8.   9.  10.]
Process-3

After this it just keeps waiting for the next one:

ERROR:tensorflow:==================================
Object was never used (type <class 'tensorflow.python.framework.ops.Operation'>):
<tf.Operation 'worker_0/init' type=NoOp>
If you want to mark it as used call its "mark_used()" method.
It was originally created here:
['File "/home/skay/.PyCharmCE2017.1/config/scratches/scratch_6.py", line 83, in <module>\n    proc.start()', 'File "/home/skay/anaconda2/lib/python2.7/multiprocessing/process.py", line 130, in start\n    self._popen = Popen(self)', 'File "/home/skay/anaconda2/lib/python2.7/multiprocessing/forking.py", line 126, in __init__\n    code = process_obj._bootstrap()', 'File "/home/skay/anaconda2/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap\n    self.run()', 'File "/home/skay/anaconda2/lib/python2.7/multiprocessing/process.py", line 114, in run\n    self._target(*self._args, **self._kwargs)', 'File "/home/skay/.PyCharmCE2017.1/config/scratches/scratch_6.py", line 49, in cifar10\n    init_op=tf.initialize_all_variables(),logdir=\'/tmp/mydir\')', 'File "/home/skay/anaconda2/lib/python2.7/site-packages/tensorflow/python/util/tf_should_use.py", line 170, in wrapped\n    return _add_should_use_warning(fn(*args, **kwargs))', 'File "/home/skay/anaconda2/lib/python2.7/site-packages/tensorflow/python/util/tf_should_use.py", line 139, in _add_should_use_warning\n    wrapped = TFShouldUseWarningWrapper(x)', 'File "/home/skay/anaconda2/lib/python2.7/site-packages/tensorflow/python/util/tf_should_use.py", line 96, in __init__\n    stack = [s.strip() for s in traceback.format_stack()]']
==================================
2017-07-23 16:28:28.646871: I tensorflow/core/distributed_runtime/master.cc:209] CreateSession still waiting for response from worker: /job:worker/replica:0/task:0
2017-07-23 16:28:38.647276: I tensorflow/core/distributed_runtime/master.cc:209] CreateSession still waiting for response from worker: /job:worker/replica:0/task:0
2017-07-23 16:28:48.647526: I tensorflow/core/distributed_runtime/master.cc:209] CreateSession still waiting for response from worker: /job:worker/replica: 

I have 2 questions here:

  1. How do I fix this gRPC error?
  2. I set up a multiprocessing queue "result" using Manager() and passed it to both workers when setting up the processes. I expected each process to write its job ID to the queue once a condition was reached, but the queue always seems to contain the process that finished last. Does this mean a position in the queue is being overwritten by the other process?

[{'worker':0},{'worker':0}]

Can I use a multiprocessing queue to share a dictionary between two sessions running in two different TensorFlow processes?
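A stripped-down reproduction of the queue behaviour I am describing (plain multiprocessing, no TensorFlow; `queue_worker` and `run_demo` are names I made up for this sketch). Both workers write under the same key of ONE shared Manager dict, so the later write overwrites the earlier one, and enqueuing the shared dict itself would store two views of the same object; enqueuing a per-worker snapshot keeps the results separate:

```python
# Plain-multiprocessing sketch (no TensorFlow) of the shared-dict pitfall.
import multiprocessing

def queue_worker(task_index, shared, queue):
    # Writing under the SAME key of one shared dict: the later write wins,
    # and queue.put(shared) would enqueue two views of one object.
    shared['worker'] = task_index
    # Fix: enqueue a private snapshot holding this worker's own result.
    queue.put({'worker': task_index})

def run_demo(n_workers=2):
    manager = multiprocessing.Manager()
    shared = manager.dict()
    queue = manager.Queue()
    procs = [multiprocessing.Process(target=queue_worker,
                                     args=(i, shared, queue))
             for i in range(n_workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return [queue.get() for _ in range(n_workers)]

if __name__ == '__main__':
    print(run_demo())  # distinct entries, e.g. one per worker, in some order
```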

Below is my code:

# build a python multiprocessing example
import multiprocessing
import time
import tensorflow as tf
from tensorflow.contrib.training import HParams
import os
import psutil
import numpy as np
from tensorflow.python.client import device_lib
from resnet import *
import Queue

cluster_spec ={"ps": ["localhost:2226"
                      ],
    "worker": [
        "localhost:2227",
        "localhost:2228"]}

cluster = tf.train.ClusterSpec(cluster_spec)
im_Test = np.linspace(1,10,10)

def model_fun(input):
    print multiprocessing.current_process().name
    return input

def cifar10(device,return_dict,result_t):
    params = HParams(cluster=cluster,
                     job_name = device[0],
                     task_index = device[1])

    serv = tf.train.Server(cluster, job_name= params.job_name,task_index=params.task_index)
    input_img=[]
    true_lab=[]

    if params.job_name == "ps":
        ## try and wait for all the workers
        serv.join()
    elif params.job_name == "worker":
        with tf.device(tf.train.replica_device_setter(worker_device="/job:worker/replica:0/task:%d" % params.task_index,
                                                      cluster=cluster)):
            # with tf.Graph().as_default(), tf.device('/cpu:%d' % params.task_index):
            # with tf.container('%s %d' % ('batchname', params.task_index)) as scope:
            input_img = tf.placeholder(dtype=tf.float32, shape=[10,])
            with tf.name_scope('%s_%d' % (params.job_name, params.task_index)) as scope:
                hess_op = model_fun(input_img)
                global_step = tf.contrib.framework.get_or_create_global_step()
                sv = tf.train.Supervisor(is_chief=(params.task_index == 0),
                                         global_step=global_step,
                                         init_op=tf.initialize_all_variables(),logdir='/tmp/mydir')
                with sv.prepare_or_wait_for_session(serv.target) as sess:
                    step = 0
                    while not sv.should_stop() :
                        hess = sess.run(hess_op, feed_dict={input_img:im_Test })
                        print(np.array(hess))
                        print multiprocessing.current_process().name
                        step += 1
                        if(step==3):
                            return_dict[params.job_name] = params.task_index
                            result_t.put(return_dict)
                            break
                    sv.stop()
                    sess.close()


    return

if __name__ == '__main__':

    logger = multiprocessing.log_to_stderr()
    manager = multiprocessing.Manager()
    result = manager.Queue()
    return_dict = manager.dict()
    processes = []
    devices = [['ps', 0],
               ['worker', 0],
               ['worker', 1]
               ]

    for i in (devices):
        start_time = time.time()
        proc = multiprocessing.Process(target=cifar10,args=(i,return_dict,result))
        processes.append(proc)
        proc.start()

    for p in processes:
        p.join()

    # print return_dict.values()
    kill = []
    while True:
        if result.empty() == True:
                break
        kill.append(result.get())
        print kill


    print("time taken = %d" % (time.time() - start_time))

In my case, I found that the ps raised this error, and the worker kept waiting for a response, when I submitted a TensorFlowOnSpark job in yarn-cluster mode.

The ps error is as follows:

2018-01-17 11:08:46,366 INFO (MainThread-7305) Starting TensorFlow ps:0 on cluster node 0 on background process
2018-01-17 11:08:56,085 INFO (MainThread-7395) 0: ======== ps:0 ========
2018-01-17 11:08:56,086 INFO (MainThread-7395) 0: Cluster spec: {'ps': ['172.16.5.30:33088'], 'worker': ['172.16.5.22:41428', '172.16.5.30:33595']}
2018-01-17 11:08:56,086 INFO (MainThread-7395) 0: Using CPU
2018-01-17 11:08:56.087452: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
E0117 11:08:56.088501182 7395 ev_epoll1_linux.c:1051] grpc epoll fd: 10
E0117 11:08:56.088860707 7395 server_chttp2.c:38] {"created":"@1516158536.088783549","description":"No address added out of total 1 resolved","file":"external/grpc/src/core/ext/transport/chttp2/server/chttp2_server.c","file_line":245,"referenced_errors":[{"created":"@1516158536.088779164","description":"Failed to add any wildcard listeners","file":"external/grpc/src/core/lib/iomgr/tcp_server_posix.c","file_line":338,"referenced_errors":[{"created":"@1516158536.088771177","description":"Unable to configure socket","fd":12,"file":"external/grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.c","file_line":200,"referenced_errors":[{"created":"@1516158536.088767669","description":"OS Error","errno":98,"file":"external/grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.c","file_line":173,"os_error":"Address already in use","syscall":"bind"}]},{"created":"@1516158536.088778651","description":"Unable to configure socket","fd":12,"file":"external/grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.c","file_line":200,"referenced_errors":[{"created":"@1516158536.088776541","description":"OS Error","errno":98,"file":"external/grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.c","file_line":173,"os_error":"Address already in use","syscall":"bind"}]}]}]}
Process Process-2:
Traceback (most recent call last):
  File "/data/yarn/nm/usercache/hdfs/appcache/application_1515984940590_0270/container_e13_1515984940590_0270_01_000002/Python/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/data/yarn/nm/usercache/hdfs/appcache/application_1515984940590_0270/container_e13_1515984940590_0270_01_000002/Python/lib/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "/data/yarn/nm/usercache/hdfs/appcache/application_1515984940590_0270/container_e13_1515984940590_0270_01_000001/tfspark.zip/tensorflowonspark/TFSparkNode.py", line 269, in wrapper_fn
  File "/data/yarn/nm/usercache/hdfs/appcache/application_1515984940590_0270/container_e13_1515984940590_0270_01_000002/pyfiles/mnist_dist.py", line 38, in map_fun
    cluster, server = ctx.start_cluster_server(1, args.rdma)
  File "/data/yarn/nm/usercache/hdfs/appcache/application_1515984940590_0270/container_e13_1515984940590_0270_01_000002/tfspark.zip/tensorflowonspark/TFSparkNode.py", line 56, in start_cluster_server
    return TFNode.start_cluster_server(self, num_gpus, rdma)
  File ".../tfspark.zip/tensorflowonspark/TFNode.py", in start_cluster_server
    server = tf.train.Server(cluster, ctx.job_name, ctx.task_index)
  File "/data/yarn/nm/usercache/hdfs/appcache/application_1515984940590_0270/container_e13_1515984940590_0270_01_000002/Python/lib/python2.7/site-packages/tensorflow/python/training/server_lib.py", line 145, in __init__
    self._server_def.SerializeToString(), status)
  File "/data/yarn/nm/usercache/hdfs/appcache/application_1515984940590_0270/container_e13_1515984940590_0270_01_000002/Python/lib/python2.7/site-packages/tensorflow/python/framework/errors_impl.py", line 473, in __exit__
    c_api.TF_GetCode(self.status.status))
UnknownError: Could not start gRPC server

worker:1 log

2018-01-17 11:09:14.614244: I tensorflow/core/distributed_runtime/master.cc:221] CreateSession still waiting for response from worker: /job:ps/replica:0/task:0

Then I checked the port on the ps server. Yes, the port was already in use.

So resubmitting the job solved the problem.

However, if you get this error on every run of your code, you should dig further into the logs to find the root cause.
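As a quick way to rule the port conflict in or out, you can probe the port with a plain socket before starting the server. This is just an illustrative sketch (the helper names `port_in_use` and `pick_free_port` are mine, not a TensorFlow API); "Address already in use" is exactly the errno 98 bind failure buried in the gRPC log above:

```python
# Probe a TCP port before starting a tf.train.Server on it (sketch only).
import errno
import socket

def port_in_use(port, host=''):
    """True if binding the port fails with EADDRINUSE (errno 98 on Linux)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return False
    except socket.error as e:
        return e.errno == errno.EADDRINUSE
    finally:
        s.close()

def pick_free_port():
    """Ask the OS for a currently free ephemeral port (small race window)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(('', 0))
        return s.getsockname()[1]
    finally:
        s.close()

if __name__ == '__main__':
    print(port_in_use(2226), pick_free_port())
```

Note that `pick_free_port` is inherently racy (another process can grab the port between the check and the server start), so it is only a diagnostic aid, not a guarantee.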
