
Is it possible for a TensorFlow graph to run outside of a session?

Could someone please explain the following situation:

I've created a simple convolutional neural network using TensorFlow. I'm using a class, and I've created my graph in the constructor. I then train the network using a train method I've written. I'm also using queues and the feed-in mechanism. This is an excerpt from the code:

class Super_res:
    'Create a CNN model which augments the resolution of an image'

    # object initialization (python) - constructor
    def __init__(self, input, output, batch_size, record_size, weights, biases):
        # input (no. neurons), output (no. neurons), batch_size (batches to process before registering delta), record_size ()
        print("Initializing object")
        self.input = input
        self.output = output
        self.batch_size = batch_size
        self.record_size = record_size
        self.weights = weights
        self.biases = biases

        # initialize data batch readers. Parameters: [path], record_size, batch_size
        self.data_batch = data_reader3.batch_generator([DATA_PATH_OPTICAL_TRAIN], self.record_size, self.batch_size)  # train set
        self.data_batch_eval = data_reader3.batch_generator([DATA_PATH_EVAL], self.record_size, self.batch_size)  # eval set

        # data_batch is a [batch_size, 2, n_input] tensor. The second dimension is comprised of the low-res image and the GT high-res image. Each of these images is comprised of n_input entries (a flat vector)
        self.data1 = tf.placeholder_with_default(tf.transpose(self.data_batch, [1, 0, 2]), [2, batch_size, n_input])  # one for optical and another for the GT image, [batch_size, n_input] each

        self.keep_prob = tf.placeholder(tf.float32)  # dropout (keep probability) - this placeholder can accept a Tensor of arbitrary shape

        # create network model
        self.pred = self.cnn_model(self.data1[0], self.weights, self.biases)  # self.data1[0] is the low-res data

    def train(self):
        #self.low_res = self.data1[0]
        #self.high_res = self.data1[1]

        # define loss and optimizer
        #self.cost = tf.reduce_mean(tf.pow(self.data1[1] - self.pred, 2))
        #self.optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(self.cost)

        # Initialize the variables
        init = tf.global_variables_initializer()

        # Initialize session
        with tf.Session() as sess:
            sess.run(init)
            coord = tf.train.Coordinator()
            threads = tf.train.start_queue_runners(coord=coord)
            step = 1
            print("Entering training")

            # Keep training until the max number of iterations is reached
            while step * batch_size < training_iters:
                #_, c = sess.run([self.optimizer, self.cost])
                conv_result = sess.run(self.pred)
                print(conv_result)
                #data2 = self.data1[0]
                #print(data2)

                if step % display_step == 0:
                    print("Step:", '%04d' % (step+1))
                #   "cost=", c)
                step = step + 1

            coord.request_stop()
            coord.join(threads)
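
For reference, data_reader3.batch_generator isn't shown in the question, but the traceback further down shows that it ends in a tf.train.shuffle_batch call. A minimal sketch of what such a queue-based reader typically looks like (a hypothetical reconstruction on my part -- the reader type, dtype, and record layout are assumptions, not the poster's actual code):

    import tensorflow as tf

    def batch_generator(paths, record_size, batch_size):
        # Filename queue feeding a fixed-length reader, batched through
        # tf.train.shuffle_batch (the op named in the traceback below).
        filename_queue = tf.train.string_input_producer(paths)
        reader = tf.FixedLengthRecordReader(record_bytes=record_size)
        _, value = reader.read(filename_queue)
        record = tf.decode_raw(value, tf.float32)
        record.set_shape([record_size // 4])  # float32 = 4 bytes per element
        min_after_dequeue = 1000
        capacity = min_after_dequeue + 3 * batch_size
        return tf.train.shuffle_batch([record], batch_size=batch_size,
                                      capacity=capacity,
                                      min_after_dequeue=min_after_dequeue)

If any stage of such a pipeline produces no elements (wrong path, unreadable files, a reader that errors out), the shuffle queue stays empty and the dequeue inside shuffle_batch eventually raises the OutOfRangeError shown below.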

When I run this code, I get the following error output:

Entering training
Traceback (most recent call last):
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\client\session.py", line 1139, in _do_call
    return fn(*args)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\client\session.py", line 1121, in _run_fn
    status, run_metadata)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\contextlib.py", line 66, in __exit__
    next(self.gen)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 466, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.OutOfRangeError: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 512, current size 0)
     [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
     [[Node: shuffle_batch/_25 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_5_shuffle_batch", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "super_res_class.py", line 137, in <module>
    p.train()
  File "super_res_class.py", line 106, in train
    conv_result = sess.run(self.pred)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\client\session.py", line 789, in run
    run_metadata_ptr)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\client\session.py", line 997, in _run
    feed_dict_string, options, run_metadata)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\client\session.py", line 1132, in _do_run
    target_list, options, run_metadata)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.OutOfRangeError: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 512, current size 0)
     [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
     [[Node: shuffle_batch/_25 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_5_shuffle_batch", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

Caused by op 'shuffle_batch', defined at:
  File "super_res_class.py", line 136, in <module>
    p = Super_res(1024, 1024, 512, record_size, weights, biases)  # params (n_input, n_output, batch_size)
  File "super_res_class.py", line 50, in __init__
    self.data_batch = data_reader3.batch_generator([DATA_PATH_OPTICAL_TRAIN], self.record_size, self.batch_size)  # train set
  File "E:\google_drive\Doctorate\matlab code\Tensorflow\doctorate_CNN\dong_recreation\data_reader3.py", line 156, in batch_generator
    capacity=capacity, min_after_dequeue=min_after_dequeue)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\training\input.py", line 1217, in shuffle_batch
    name=name)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\training\input.py", line 788, in _shuffle_batch
    dequeued = queue.dequeue_many(batch_size, name=name)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\ops\data_flow_ops.py", line 457, in dequeue_many
    self._queue_ref, n=n, component_types=self._dtypes, name=name)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\ops\gen_data_flow_ops.py", line 946, in _queue_dequeue_many_v2
    timeout_ms=timeout_ms, name=name)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 767, in apply_op
    op_def=op_def)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\framework\ops.py", line 2506, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\framework\ops.py", line 1269, in __init__
    self._traceback = _extract_stack()

OutOfRangeError (see above for traceback): RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 512, current size 0)
     [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
     [[Node: shuffle_batch/_25 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_5_shuffle_batch", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

(tensorflow_gpu) E:\google_drive\Doctorate\matlab code\Tensorflow\doctorate_CNN\dong_recreation>

When I remove the sess.run() from my pred output, the code seems to operate normally.

def train(self):
    #self.low_res = self.data1[0]
    #self.high_res = self.data1[1]

    # define loss and optimizer
    #self.cost = tf.reduce_mean(tf.pow(self.data1[1] - self.pred, 2))
    #self.optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(self.cost)

    # Initialize the variables
    init = tf.global_variables_initializer()

    # Initialize session
    with tf.Session() as sess:
        sess.run(init)
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(coord=coord)
        step = 1
        print("Entering training")

        # Keep training until the max number of iterations is reached
        while step * batch_size < training_iters:
            #_, c = sess.run([self.optimizer, self.cost])
            conv_result = self.pred   # note: no sess.run() here
            print(conv_result)
            #data2 = self.data1[0]
            #print(data2)

            if step % display_step == 0:
                print("Step:", '%04d' % (step+1))
            #   "cost=", c)
            step = step + 1

        coord.request_stop()
        coord.join(threads)

Could someone please explain this to me? Normally, the graph is only evaluated when run under a session! What gives here?

Just saying conv_result = self.pred won't do anything -- it merely binds the name conv_result to the Tensor object; you do indeed need sess.run(self.pred) to get it to execute. The errors you're getting are a separate issue with your model: as the messages say, your InputProducer has an empty queue. With the information you've given it can't be diagnosed, but I would search further on this site for why your InputProducer isn't filling / has zero size.
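
To make the lazy-evaluation point concrete, here is a tiny standalone example (mine, not the poster's code): building a graph node only creates a symbolic Tensor, printing it shows metadata, and only Session.run() triggers computation.

    import tensorflow as tf

    a = tf.constant([[1.0, 2.0]])
    b = tf.constant([[3.0], [4.0]])
    pred = tf.matmul(a, b)     # graph construction only - nothing runs yet

    print(pred)                # prints Tensor("MatMul:0", shape=(1, 1), dtype=float32)
    with tf.Session() as sess:
        print(sess.run(pred))  # prints [[ 11.]] - the matmul actually executes here

That is why the second version of train() appears to "operate normally": conv_result = self.pred never touches the queue, so the empty-queue condition is never reached -- nothing is computed at all.

As for the empty queue itself, one common cause in TF1 queue pipelines (an assumption on my part, since data_reader3 isn't shown) is a string_input_producer created with num_epochs: its epoch counter is a local variable, which tf.global_variables_initializer() does not cover, so the producer closes the queue while it is still empty. A hedged sketch of that fix:

    # Sketch, assuming batch_generator uses
    # tf.train.string_input_producer(..., num_epochs=N): the epoch counter
    # is a LOCAL variable and needs its own initializer before the queue
    # runners start, otherwise the queue closes while still empty.
    init = tf.group(tf.global_variables_initializer(),
                    tf.local_variables_initializer())
    with tf.Session() as sess:
        sess.run(init)
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)
        # ... training loop goes here ...
        coord.request_stop()
        coord.join(threads)

Wrong or empty input paths produce the same symptom, so verifying that DATA_PATH_OPTICAL_TRAIN actually points at readable data is worth doing first.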
