
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder' with dtype float

Here is my code:

import numpy as np
import tensorflow as tf

input_dim=8
layer1_dim=6

learning_rate=0.01

train_data=np.loadtxt("data.txt",dtype=float)
train_target=train_data[:,-1]
train_feature=train_data[:,0:-1]
test_data=np.loadtxt("data.txt",dtype=float)
test_target=test_data[:,-1]
test_feature=test_data[:,0:-1]


x=tf.placeholder(tf.float32)
y=tf.placeholder(tf.float32)

w1=tf.Variable(tf.random_normal([input_dim,layer1_dim]))


b1=tf.Variable(tf.random_normal([1,layer1_dim]))


layer_1 = tf.nn.tanh(tf.add(tf.matmul(x, w1), b1))


loss=tf.reduce_mean(tf.square(layer_1-y))

train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

init = tf.global_variables_initializer()

with tf.Session() as session:
    session.run(init)

    for i in range(10):
        print(session.run(train_op, feed_dict={x: train_feature, y: train_target}))
        print(layer_1)
        print(loss.eval())

Here is my error:

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder' with dtype float
 [[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=<unknown>, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Process finished with exit code 1

The data is just an ordinary matrix: a 6x8 feature block and a 6x1 target column. The print of sess.run gives "None". If I don't print the loss there is no error, but then I get nothing from sess.run.
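For reference, a minimal sketch of the loading step (assuming data.txt is a whitespace-separated 6x9 matrix, eight feature columns plus the target) shows what the slices actually look like:

import numpy as np

# hypothetical stand-in for data.txt: a whitespace-separated 6x9 matrix
np.savetxt("data.txt", np.random.randn(6, 9))

train_data = np.loadtxt("data.txt", dtype=float)
train_target = train_data[:, -1]
train_feature = train_data[:, 0:-1]

print(train_data.dtype)     # float64 (the default float of np.loadtxt)
print(train_feature.shape)  # (6, 8)
print(train_target.shape)   # (6,)  -- a 1-D vector, not (6, 1)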

You should double-check whether the input is really what you want. The following snippet works:

import numpy as np
import tensorflow as tf

input_dim = 8
layer1_dim = 6
learning_rate = 0.01

train_data = np.random.randn(6, 9).astype(np.float32)      # stand-in for data.txt
train_target = np.expand_dims(train_data[:, -1], axis=-1)  # shape (6, 1), not (6,)
train_feature = train_data[:, 0:-1]                        # shape (6, 8)

assert train_feature.dtype == np.float32
assert train_target.dtype == np.float32
assert train_feature.shape == (6, 8)
assert train_target.shape == (6, 1)


x = tf.placeholder(tf.float32, name='plhdr_X')
y = tf.placeholder(tf.float32, name='pldhr_Y')

w1 = tf.Variable(tf.random_normal([input_dim, layer1_dim]))
b1 = tf.Variable(tf.random_normal([1, layer1_dim]))

layer_1 = tf.nn.tanh(tf.add(tf.matmul(x, w1), b1))
loss = tf.reduce_mean(tf.square(layer_1 - y))

train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

init = tf.global_variables_initializer()

with tf.Session() as session:
    session.run(init)
    for i in range(10):
        # fetch the loss in the same run() call, so it is computed with this feed_dict
        _, err = session.run([train_op, loss], feed_dict={
                             x: train_feature, y: train_target})
        print(err)
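This also explains why the original loop fails only when the loss is printed: loss.eval() runs the graph again, and calling it without a feed_dict leaves the placeholders unfed for that run. A minimal sketch of the two equivalent ways to read the loss value, reusing the graph and session defined above:

    # inside the `with tf.Session() as session:` block above

    # Option 1: fetch the loss together with the training op (as in the snippet above)
    _, err = session.run([train_op, loss],
                         feed_dict={x: train_feature, y: train_target})

    # Option 2: Tensor.eval() also works, but it needs the same feed_dict;
    # loss.eval() with no feed leaves x and y unfed and raises the
    # "You must feed a value for placeholder tensor ..." error
    err = loss.eval(feed_dict={x: train_feature, y: train_target})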

If you give each placeholder a name, you will get more details in the error message.
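For example, with named placeholders, forgetting to feed one of them produces an error that points at the missing tensor by name rather than the generic 'Placeholder' (a small sketch; the exact wording depends on the TensorFlow version):

import tensorflow as tf

x = tf.placeholder(tf.float32, name='plhdr_X')
y = tf.placeholder(tf.float32, name='pldhr_Y')
s = x + y

with tf.Session() as session:
    try:
        # only x is fed, so the error message names the missing 'pldhr_Y'
        session.run(s, feed_dict={x: 1.0})
    except tf.errors.InvalidArgumentError as e:
        print(e.message)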
