I am doing text classification with a Convolutional Neural Network. In the MNIST example they have 60,000 images of hand-written digits; each image has size 28 x 28 and there are 10 labels (from 0 to 9), so the weight matrix W has size 784 x 10 (28 * 28 = 784).
Here is their code:
x = tf.placeholder("float", [None, 784])
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
In my case, I applied word2vec to encode my documents. The dictionary size of the word embedding is 2000 and the embedding size is 128. There are 45 labels. I tried to do the same as in the example, but it did not work. Here is what I did: I treated each document like an image. For instance, a document can be represented as a 2000 x 128 matrix (for words that appear in the document I filled in the corresponding word vector, and left the other rows zero). I have trouble creating W and x, since my input data is a numpy array of shape 2000 x 128 while x = tf.placeholder("float", [None, 256000])
. The sizes do not match.
Could anyone suggest any advice?
Thanks
Placeholder x is an array of flattened images, where the first dimension None corresponds to the batch size, i.e. the number of images, and 256000 = 2000 * 128. So, in order to feed x properly, you need to flatten your input. Since you mention that your input is numpy arrays, take a look at numpy.reshape and numpy.ndarray.flatten.
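A minimal sketch of the flattening step, assuming your per-document matrices have shape (2000, 128) as you describe (the zero-filled arrays here stand in for your real word2vec data):

```python
import numpy as np

# One hypothetical document: 2000 dictionary entries x 128 embedding dims.
doc = np.zeros((2000, 128), dtype=np.float32)

# Flatten to a single row of 256000 values so it matches the placeholder
# shape [None, 256000]; the leading 1 is the batch dimension.
batch_of_one = doc.reshape(1, 2000 * 128)
print(batch_of_one.shape)  # (1, 256000)

# Several documents can be stacked and flattened the same way.
docs = np.zeros((32, 2000, 128), dtype=np.float32)
batch = docs.reshape(32, -1)  # -1 lets NumPy infer 256000
print(batch.shape)  # (32, 256000)
```

The resulting 2-D array can then be passed via feed_dict as the value of x. Note that W and b would also need to be resized for your problem, e.g. tf.zeros([256000, 45]) and tf.zeros([45]) for 45 labels.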