InvalidArgumentError in tensorflow (softmax mnist)

When I was trying to implement softmax regression with TensorFlow, the following problem occurred:

tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float
[[Node: Placeholder_1 = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

From the description above, I understand that the problem is an argument type error. But in my code, the type of my data is the same as that of the placeholder.

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

m = input_data.read_data_sets("MNIST_data/", one_hot=True)
sess = tf.InteractiveSession()

x = tf.placeholder(tf.float32, [None, 784])
w = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

y = tf.nn.softmax(tf.matmul(x, w)+b)
y_ = tf.placeholder(tf.float32, [None, 10])

cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
tf.global_variables_initializer()

for i in range(1000):
    batch_xs, batch_ys = m.train.next_batch(100)
    train_step.run({x: batch_xs, y: batch_ys})

correct_prediction = tf.equal(tf.arg_max(y, 1), tf.arg_max(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval({x: m.test.images, y: m.test.labels}))

I think the problem is caused by the types of batch_xs (float32) and batch_ys (float32).

Any suggestions on how to solve this?

The problem is caused by the fact that you're passing y instead of y_ into the feed_dict of the accuracy.eval call.

In this way, you're overwriting the value of y, and your placeholder y_ is not used.

Just change the line to:

print(accuracy.eval({x: m.test.images, y_: m.test.labels}))
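
To see why the original call raises that error: in TensorFlow 1.x a feed_dict entry overrides the tensor used as the key, so {..., y: m.test.labels} replaces the computed softmax output, while y_ (which accuracy still depends on through tf.arg_max(y_, 1)) receives no value at all. Below is a minimal self-contained toy sketch of the same mechanism; the tensors a, b and c are made up for illustration (standing in for x, y_ and y) and this is not the MNIST graph from the question.

import tensorflow as tf

sess = tf.InteractiveSession()

a = tf.placeholder(tf.float32, [None, 2])   # plays the role of x
b = tf.placeholder(tf.float32, [None, 2])   # plays the role of y_
c = tf.nn.softmax(a)                        # plays the role of y, computed from a

# accuracy-style op that depends on the placeholder b and on c (and hence on a)
match = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(c, 1), tf.argmax(b, 1)), tf.float32))

data = [[1.0, 0.0], [0.0, 1.0]]

# Works: every placeholder that `match` depends on is fed.
print(match.eval({a: data, b: data}))       # prints 1.0

# Fails with "You must feed a value for placeholder tensor ..." because
# feeding c only overrides the softmax output; b is still left unfed.
# print(match.eval({a: data, c: data}))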
