
Why does the TensorFlow tf.FIFOQueue close early in the following code?

I am trying to implement a queue that has an enqueue running in the background and a dequeue running in the main thread.

The goal is to run an optimizer in a loop that depends on a value stored in a buffer, where the value only changes with each step of the optimization. Here is a simple example to illustrate:

import numpy as np
import tensorflow as tf

VarType = tf.int32

data0 = np.array([1.0])

init = tf.placeholder(VarType, [1])
q = tf.FIFOQueue(capacity=1, shapes=[1], dtypes=VarType)
nq_init = q.enqueue(init)
# I use a Variable intermediary because I will want to access the
# data multiple times, but I do not want the next data point in the
# queue until I initialize the variable again.
data_ = tf.Variable(q.dequeue(), trainable=False, collections=[])

# Notice that data_ is accessed twice, but should be the same
# in a single sess.run
# so "data_ = q.dequeue()" would not be correct
# plus there needs to be access to initial data
data1 = data_ + 1
data2 = data_ * data1
qr = tf.train.QueueRunner(q, [q.enqueue(data2)] * 1)
tf.train.add_queue_runner(qr)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)

    sess.run(nq_init, feed_dict={init:data0})
    # this first initialization works fine
    sess.run(data_.initializer)
    for n in range(10):
        print(sess.run(data2))
        # this second initialization errors out: 
        sess.run(data_.initializer)

    coord.request_stop()
    coord.join(threads)

print('Done')

This piece of code errors out, with the following error:

"OutOfRangeError (see above for traceback): FIFOQueue '_0_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)"

Why, and how is this fixed?

So I found the "how to fix it" part, but not the full why.

It seems that the first enqueue/dequeue must run before the queue runners start executing the second enqueue (the one registered via tf.train.add_queue_runner). A plausible explanation: if the runner thread starts while data_ is still uninitialized, its q.enqueue(data2) op fails, the coordinator stops, and the queue gets closed, which would explain why the later dequeue reports "closed and has insufficient elements". There is a further caveat: we need to run sess.run(data_.initializer) twice per iteration:

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    sess.run(nq_init, feed_dict={init:data0})
    sess.run(data_.initializer)
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)

    for n in range(10):
        print(sess.run(data2))
        sess.run(data_.initializer)
        sess.run(data_.initializer)

    coord.request_stop()
    coord.join(threads)

Output is as expected:

[2]; [6]; [42];...
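As a sanity check, this output matches the recurrence the graph defines, x → x * (x + 1), applied to the initial value 1. A plain-Python sketch, independent of TensorFlow:

```python
# Each step reads the buffered value x, emits data2 = x * (x + 1),
# and feeds the result back as the next buffered value.
x = 1  # int(data0[0])
sequence = []
for _ in range(4):
    data1 = x + 1
    data2 = x * data1
    sequence.append(data2)
    x = data2  # the value just enqueued is dequeued on the next step
print(sequence)  # [2, 6, 42, 1806]
```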

Without the two calls I get the following:

[2]; [6]; [6]; [42];...

I suspect that q.enqueue has its own buffer that holds the old data2, so the initializer must be run twice to flush it. This also fits with the first value not repeating, because at that point the second q.enqueue buffer is still empty. I am not sure how to overcome this quirk.
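The enqueue/dequeue handshake itself can be mimicked in plain Python with queue.Queue. This is an analogy only, not a model of sess.run or QueueRunner internals: the main loop dequeues once into a "variable", reads it twice, and a background thread plays the role of the runner enqueueing data2:

```python
import queue
import threading

q = queue.Queue(maxsize=1)
q.put(1)  # analogue of the initial nq_init enqueue

results = []
for _ in range(4):
    data_ = q.get()        # analogue of sess.run(data_.initializer):
                           # dequeue once, then reuse the cached value
    data1 = data_ + 1      # data_ is read twice within one step,
    data2 = data_ * data1  # but both reads see the same buffered value
    # analogue of the QueueRunner enqueueing data2 from a background thread
    threading.Thread(target=q.put, args=(data2,)).start()
    results.append(data2)

print(results)  # [2, 6, 42, 1806]
```

Because the queue has capacity 1 and q.get blocks until the background put lands, each step deterministically sees exactly the value produced by the previous one, which is the behavior the Variable intermediary is meant to guarantee in the TensorFlow graph.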
