How to iterate a tensor in tensorflow?

I am getting the following error:

TypeError: 'Tensor' object is not iterable.

I am trying to use a placeholder and a FIFOQueue to feed data, but the problem is that I cannot batch the data. Could anyone provide a solution?

I'm new to TensorFlow and confused about the concepts of placeholders and tensors.

Here is the code:

    # -*- coding: utf-8 -*-
    import tensorflow as tf

    q = tf.FIFOQueue(1000, tf.string)
    label_ph = tf.placeholder(tf.string, name="label")
    enqueue_op = q.enqueue_many(label_ph)
    qr = tf.train.QueueRunner(q, enqueue_op)
    m = q.dequeue()

    sess_conf = tf.ConfigProto()
    sess_conf.gpu_options.allow_growth = True
    sess = tf.Session(config=sess_conf)
    sess.run(tf.global_variables_initializer())
    coord = tf.train.Coordinator()
    tf.train.start_queue_runners(coord=coord, sess=sess)

    image_batch = tf.train.batch(
            m, batch_size=3,    # <- this raises the TypeError: tf.train.batch
            enqueue_many=True,  #    expects a *list* of tensors, e.g. [m]
            capacity=9
            )

    for i in range(0, 10):
        print("-------------------------")
        #print(sess.run(q.dequeue()))
        a = ['a', 'b', 'c', 'a1', 'b1', 'c1', 'a', 'b', 'c2', 'a', 'b', 'c3']
        sess.run(enqueue_op, {label_ph: a})
        b = sess.run(m)
        print(b)
    q.close()  # note: this only builds the close op; it is never run
    coord.request_stop()

I think you're running into the same issue I am. When you run the session you don't actually have access to the data; you have access to the data graph instead. So you should think of tensor objects as nodes in a graph, not as chunks of data you can manipulate directly. If you want to operate on the graph, you have to call tf.* functions or fetch the values inside a sess.run() call. When you do that, TensorFlow will figure out how to get that data based on its dependencies and run the computation.

As for your question, check out the QueueRunner example on this page: https://www.tensorflow.org/programmers_guide/threading_and_queues

Another way you could do it (which is what I switched to) is to shuffle your data on the CPU and then copy it all over at once. Then you keep track of which step you're on and grab the data for that step. It helps keep the GPU fed with data and reduces memory copies.

    all_shape = [num_batches, batch_size, data_len]
    local_shape = [batch_size, data_len]

    ## declare the data variables
    model.all_data = tf.Variable(tf.zeros(all_shape), dtype=tf.float32)
    model.step_data = tf.Variable(tf.zeros(local_shape), dtype=tf.float32)
    model.step = tf.Variable(0, dtype=tf.float32, trainable=False, name="step")

    ## then for each step in the epoch grab the data
    index = tf.to_int32(model.step)
    model.step_data = model.all_data[index]

    ## inc the step counter
    tf.assign_add(model.step, 1.0, name="inc_counter") 
