TensorFlow: InvalidArgumentError: In[0] is not a matrix

I am new to TensorFlow and need to implement a deep neural network for a regression task. I assume there are no sample codes on the internet where regression is performed using a deep neural network (at least I could not find any; please post a link if you know of one). So I have tried to merge the tutorials on deep neural networks for classification and for regression for my purpose. As expected, I am bombarded with errors. The error message reads:

InvalidArgumentError: In[0] is not a matrix
 [[Node: MatMul_35 = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_Placeholder_36_0, Variable_72/read)]]  

The code:

import tensorflow as tf
import numpy
import matplotlib.pyplot as plt

n_nodes_hl1 = 100
n_nodes_hl2 = 100

batch_size = 100

n_input = 1;
n_output = 1;
learning_rate = 0.01

train_X = numpy.asarray([3.3,4.4,5.5,6.71,6.93,4.168,9.779,6.182,7.59,2.167,
                 7.042,10.791,5.313,7.997,5.654,9.27,3.1])
train_Y = numpy.asarray([1.7,2.76,2.09,3.19,1.694,1.573,3.366,2.596,2.53,1.221,
                 2.827,3.465,1.65,2.904,2.42,2.94,1.3])

x = tf.placeholder('float')
y = tf.placeholder('float')

def neural_network_model(data):
   hidden_1_layer = {'weights':tf.Variable(tf.random_normal([n_input, n_nodes_hl1])),
                  'biases':tf.Variable(tf.random_normal([n_nodes_hl1]))}

   hidden_2_layer = {'weights':tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])),
                  'biases':tf.Variable(tf.random_normal([n_nodes_hl2]))}

   l1 = tf.add(tf.matmul(data,hidden_1_layer['weights']), hidden_1_layer['biases'])
   l1 = tf.nn.relu(l1)

   l2 = tf.add(tf.matmul(l1,hidden_2_layer['weights']), hidden_2_layer['biases'])
   l2 = tf.nn.relu(l2)

   output = tf.reduce_sum(l2)

   return output

def train_neural_network(x):
   prediction = neural_network_model(x)
   cost = tf.square(y - prediction)

   optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

   hm_epochs = 5
   with tf.Session() as sess:

      sess.run(tf.global_variables_initializer())

      for epoch in range(hm_epochs):
         epoch_loss = 0
         for (X, Y) in zip(train_X, train_Y):
             _, c = sess.run([optimizer, cost], feed_dict={x: X, y: Y})
             epoch_loss += c

         print('Epoch', epoch, 'completed out of',hm_epochs,'loss:',epoch_loss)

      plt.plot(train_X, train_Y, 'ro', label='Original data')
      plt.plot(train_X, prediction, label='Fitted line')
      plt.legend()
      plt.show()

      test_X = numpy.asarray([6.83, 4.668, 8.9, 7.91, 5.7, 8.7, 3.1, 2.1])
      test_Y = numpy.asarray([1.84, 2.273, 3.2, 2.831, 2.92, 3.24, 1.35, 1.03])
      print("Testing Data")

      correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
      accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
      print('Accuracy:',accuracy.eval({x:test_X, y:test_Y}))

train_neural_network(x)

As far as I can guess, there is an issue with the dimensions of the hidden layer weights and/or biases (I may be wrong).
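A minimal sketch of what I mean (assuming one input and one output feature; this is not part of the original post): declaring the placeholders with explicit shapes makes a dimension mismatch surface as a clear shape error when the values are fed, rather than deep inside MatMul.

# Sketch (assumption): explicit placeholder shapes make TensorFlow report a
# shape mismatch as soon as a value is fed, which helps pin down whether the
# weights/biases or the fed data are at fault.
x = tf.placeholder('float', shape=[None, n_input])    # batch of row vectors
y = tf.placeholder('float', shape=[None, n_output])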

Side note: here I have just tried to build a simple model of my project, where the training and testing data points are taken from examples on the internet. My actual data would be pixel values of several images.

Change this line (working for me):

  1. Inputs to the matmul() function should be matrices - you are feeding a single value.

     _, c = sess.run([optimizer, cost], feed_dict={x: [[X]], y: [[Y]]})

Output:

('Epoch', 0, 'completed out of', 5, 'loss:', array([[  1.20472407e+14]], dtype=float32))
('Epoch', 1, 'completed out of', 5, 'loss:', array([[ 6.82631159]], dtype=float32))
('Epoch', 2, 'completed out of', 5, 'loss:', array([[ 8.83840561]], dtype=float32))
('Epoch', 3, 'completed out of', 5, 'loss:', array([[ 8.00222397]], dtype=float32))
('Epoch', 4, 'completed out of', 5, 'loss:', array([[ 7.6564579]], dtype=float32))
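An equivalent way to apply the same fix (a sketch, assuming the rest of the posted code stays unchanged) is to reshape each scalar sample into a 1x1 matrix before feeding it, so every input to matmul() is two-dimensional:

# Sketch of an equivalent fix (not the original answer's code): reshape each
# scalar training sample into a 1x1 matrix so matmul() never sees a 0-D value.
for (X, Y) in zip(train_X, train_Y):
    X = numpy.reshape(X, (1, 1))
    Y = numpy.reshape(Y, (1, 1))
    _, c = sess.run([optimizer, cost], feed_dict={x: X, y: Y})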

Hope this helps!

Comment: This is not a good example to explore if you're going to work with images.
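For image data, the usual approach (a rough sketch under the assumption that each image is flattened into a fixed-length pixel vector; none of the names below come from the thread) is to turn the whole dataset into an [n_images, n_pixels] matrix so that matmul() always receives 2-D input:

# Rough sketch (assumption): flatten each image into a row vector, so the
# dataset becomes an [n_images, n_pixels] matrix.
images = numpy.random.rand(50, 28, 28)    # 50 hypothetical 28x28 grayscale images
n_pixels = 28 * 28
X_mat = images.reshape(-1, n_pixels)      # shape: [50, 784]
x = tf.placeholder('float', shape=[None, n_pixels])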
