
Tensorflow: sess.run([x]) not working but sess.run([y]) works with the same feed_dict

I am learning TensorBoard, and I am following along with the code from this tutorial.

Here is my code:

import tensorflow as tf
LOGDIR = "/tmp/mnist_tutorial/"
mnist = tf.contrib.learn.datasets.mnist.read_data_sets(train_dir=LOGDIR + "data", one_hot=True)

def conv_layer(input, size_in, size_out, name="conv"):
    with tf.name_scope(name):
        w = tf.Variable(tf.zeros([5, 5, size_in, size_out]))
        b = tf.Variable(tf.zeros([size_out]))
        conv = tf.nn.conv2d(input, w, strides=[1, 1, 1, 1], padding="SAME")
        act = tf.nn.relu(conv + b)
        tf.summary.histogram("weights", w)
        tf.summary.histogram("biases", b)
        tf.summary.histogram("activations", act)
        return act


def fc_layer(input, size_in, size_out, name="fc"):
    with tf.name_scope(name):
        w = tf.Variable(tf.zeros([size_in, size_out]))
        b = tf.Variable(tf.zeros([size_out]))
        act = tf.nn.relu(tf.matmul(input, w)+b)
        tf.summary.histogram("weights", w)
        tf.summary.histogram("biases", b)
        tf.summary.histogram("activations", act)
        return act

x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
x_image = tf.reshape(x, [-1, 28, 28, 1])
tf.summary.image('input', x_image, 3)

y = tf.placeholder(tf.float32, shape=[None, 10], name='labels')

conv1 = conv_layer(x_image, 1, 32, name='conv1')
pool1 = tf.nn.max_pool(conv1, ksize=[1,2,2,1], strides=[1,2,2,1], padding="SAME")

conv2 = conv_layer(pool1, 32, 64, name='conv2')
pool2 = tf.nn.max_pool(conv2, ksize=[1,2,2,1], strides=[1,2,2,1], padding="SAME")
flattened = tf.reshape(pool2, [-1, 7*7*64])

fc1 = fc_layer(flattened, 7*7*64, 1024, name='fc1')
logits = fc_layer(fc1, 1024, 10, name='fc2')

with tf.name_scope('xent'):
    xent = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=y))
    tf.summary.scalar('cross_entropy', xent)

with tf.name_scope('train'):
    train_step = tf.train.AdamOptimizer(1e-4).minimize(xent)

with tf.name_scope('accruacy'):
    correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    tf.summary.scalar('accruacy', accuracy)

summ = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # writer =tf.summary.FileWriter("tmp/mnist_demo/1")
    # writer.add_graph(sess.graph)
    # writer.close()

    for i in range(20):
        batch = mnist.train.next_batch(100)

        # Occasionally report back the accruacy

        if i%2 == 0:
            [train_accruacy] = sess.run([accuracy], feed_dict={x:batch[0], y:batch[1]}) # works
#             [s, train_accruacy] = sess.run([summ, accuracy], feed_dict={x:batch[0], y:batch[1]}) #error!
            print("step %d, training accruacy %g" % (i, train_accruacy))

    sess.run(train_step, feed_dict={x:batch[0],y:batch[1]})

I get an error when I use this line:

[s, train_accruacy] = sess.run([summ, accuracy], feed_dict={x:batch[0], y:batch[1]}) #error!

This is the error message I receive:

You must feed a value for placeholder tensor 'x' with dtype float and shape [?,784] [[{{node x}} = Placeholder[dtype=DT_FLOAT, shape=[?,784], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

As far as I can tell, this says that the tensor I am feeding in does not have the correct shape of [?, 784] for x.

However, I don't understand why [train_accruacy] = sess.run([accuracy], feed_dict={x:batch[0], y:batch[1]}) works. After all, I am feeding the same data into the same placeholder variables, which accept tensors of the same shape.

Unless I am completely mistaken, the first argument of sess.run([argument], feed_dict=...) describes the tensors to be returned; I don't see how that would affect the shape of the data I am feeding in.
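To illustrate what I mean, here is a minimal TF 1.x sketch (the names a, double and total are made up for illustration): the fetch list only selects which tensors are evaluated and returned, while feed_dict supplies the placeholder values, so changing the fetches should not change what has to be fed.

import tensorflow as tf

# Illustration only: fetches choose the outputs, feed_dict supplies the inputs.
a = tf.placeholder(tf.float32, shape=[None, 2], name='a')
double = a * 2.0
total = tf.reduce_sum(double)

with tf.Session() as sess:
    # The same feed_dict works whether we fetch one tensor or both;
    # the fetch list only controls what sess.run() returns.
    d = sess.run(double, feed_dict={a: [[1.0, 2.0]]})
    d, t = sess.run([double, total], feed_dict={a: [[1.0, 2.0]]})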

Also: this model is supposed to have errors.

For those interested, the full code is here.

Could it also be that the returned data types are different? tf.summary.merge_all() returns a string tensor, but I doubt that is what is causing the problem.

I can't seem to find any documentation on this issue online. Is this supposed to happen?

I will answer my own question:

tf.reset_default_graph() works; add it before def conv_layer().
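A minimal sketch of where the reset goes, assuming the same layout as the script above (the histogram name is made up for illustration):

import tensorflow as tf

# Clear any ops left over from a previous run (e.g. re-running a notebook cell)
# so that tf.summary.merge_all() only sees summaries from the graph built below.
tf.reset_default_graph()

x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
tf.summary.histogram('input_hist', x)
summ = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    s = sess.run(summ, feed_dict={x: [[0.0] * 784]})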

If you don't want to use tf.reset_default_graph():

It turns out I was feeding two tensors in the same session, which TensorFlow does not allow.

for i in range(20):
    batch = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x:batch[0],y:batch[1]})
    # Occasionally report back the accruacy

    if i%2 == 0:
        [train_accruacy] = sess.run([accuracy], feed_dict={x:batch[0], y:batch[1]}) # works
#             [s, train_accruacy] = sess.run([summ, accuracy], feed_dict={x:batch[0], y:batch[1]}) #error!
        print("step %d, training accruacy %g" % (i, train_accruacy))

The code above does not work. It turns out that, for some reason, the first line of code under the i%2 check feeds the tensor batch[0] just fine; however, when that line is commented out and replaced with the second line, TensorFlow does not seem to 'flush' the x placeholder variable, and so two separate tensors end up being fed to the input (from separate sess.run() events).

This code works:

for i in range(2000):
    batch = mnist.train.next_batch(100)
    if i%10 !=0:
        sess.run(train_step, feed_dict={x:batch[0],y:batch[1]})
        # Occasionally report back the accruacy

    if i%10 == 0:
        [train_accuracy, s] = sess.run([accuracy, summ], feed_dict={x: batch[0], y: batch[1]})
        print("step %d, training accruacy %g" % (i, train_accuracy))
        writer.add_summary(s,i)

Here the tensors are fed in separately, and everything runs fine.

I would be glad if someone could let me know why this happens, or whether it is a bug.
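One thing that might be worth checking (just a guess, consistent with tf.reset_default_graph() fixing it): if the graph-building code is run more than once in the same process, the default graph can end up with duplicate placeholders ('x', 'x_1', ...) and two sets of summary ops, and the merged summary would then depend on a placeholder that is not in the feed_dict. The snippet below only lists the placeholders currently in the default graph:

import tensorflow as tf

# List every placeholder op in the default graph; duplicates such as
# 'x' and 'x_1' would suggest the graph-building code ran more than once.
placeholders = [op.name for op in tf.get_default_graph().get_operations()
                if op.type == 'Placeholder']
print(placeholders)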
