
Training in batches but testing individual data items in TensorFlow?

I have trained a convolutional neural network with a batch size of 10. However, when testing, I want to predict the classification for each data item separately, not in batches. This gives the error:

Assign requires shapes of both tensors to match. lhs shape= [1,3] rhs shape= [10,3]

I understand that 10 refers to the batch_size and 3 is the number of classes I am classifying into.

Can we not train using batches and test individually?

Update:

Training Phase:

import tensorflow as tf

batch_size = 10
classes = 3
# vlimit is some constant, the same for the training and testing phases;
# learning_rate is likewise defined elsewhere
X = tf.placeholder(tf.float32, [batch_size, vlimit], name='X_placeholder')
Y = tf.placeholder(tf.int32, [batch_size, classes], name='Y_placeholder')
w = tf.Variable(tf.random_normal(shape=[vlimit, classes], stddev=0.01), name='weights')
b = tf.Variable(tf.ones([batch_size, classes]), name="bias")
logits = tf.matmul(X, w) + b
entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y, name='loss')
loss = tf.reduce_mean(entropy)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)

Testing Phase:

batch_size = 1
classes = 3
X = tf.placeholder(tf.float32, [batch_size, vlimit], name='X_placeholder')
Y = tf.placeholder(tf.int32, [batch_size, classes], name='Y_placeholder')
w = tf.Variable(tf.random_normal(shape=[vlimit, classes], stddev=0.01), name='weights')
b = tf.Variable(tf.ones([batch_size, classes]), name="bias")
logits = tf.matmul(X, w) + b
entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y, name='loss')
loss = tf.reduce_mean(entropy)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)

Absolutely. Placeholders are 'buckets' that get fed data from your inputs; the only thing they do is direct data into your model. With the None trick they can act like 'infinite buckets': you can feed as much (or as little) data into them as you want (depending on available resources, obviously).

For training, try replacing batch_size with None in the training placeholders:

X = tf.placeholder(tf.float32, [None, vlimit], name='X_placeholder')
Y = tf.placeholder(tf.int32, [None, classes], name='Y_placeholder')

Then define everything else as before, except that the bias must no longer depend on batch_size (see the bias fix in the second answer below).

Then do some training ops, for example:

 _, Tr_loss, Tr_acc = sess.run([optimizer, loss, accuracy], feed_dict={X: batch_x, Y: batch_y})
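
Note that accuracy is not defined in the code above; a typical definition for one-hot labels would be something like:

correct_preds = tf.equal(tf.argmax(logits, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_preds, tf.float32))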

For testing, re-use these same placeholders (X, Y) and don't bother redefining the other variables.

All TensorFlow variables are static for a single TensorFlow graph definition. If you're restoring the model, then the placeholders still exist from when it was trained, as will the other variables, e.g. w, b, logits, entropy and optimizer.
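
For example, a minimal save-and-restore round trip with tf.train.Saver (the checkpoint path here is just an illustration) might look like:

saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... run training ops here ...
    saver.save(sess, './model.ckpt')  # illustrative path

# later, after rebuilding (or re-importing) the same graph:
with tf.Session() as sess:
    saver.restore(sess, './model.ckpt')
    # X, Y, w, b, logits etc. are all exactly as defined at training time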

Then do some testing op, for example:

 Ts_loss, Ts_acc = sess.run([loss, accuracy], feed_dict={X: test_x, Y: test_y})
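
And if you just want the predicted class for one sample, something like this works against the same graph (single_x is a made-up name for a [1, vlimit] array):

 pred = sess.run(tf.argmax(logits, 1), feed_dict={X: single_x})  # single_x: shape [1, vlimit]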

When you define your placeholders, use:

X = tf.placeholder(tf.float32, [None, vlimit], name='X_placeholder')
Y = tf.placeholder(tf.int32, [None, classes], name='Y_placeholder')
...

instead, for both your training and testing phases (actually, you shouldn't need to re-define these for the testing phase at all). Also, define your bias as:

b = tf.Variable(tf.ones([classes]), name="bias")

Otherwise you are training a separate bias for each sample in your batch, which is not what you want; a bias of shape [classes] is broadcast across every row of the batch.

TensorFlow automatically treats the first dimension of your input as the batch dimension, so for training you can feed it batches of 10, and for testing you can feed it individual samples (or batches of 100, or whatever).
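
Putting both fixes together, here's a minimal end-to-end sketch. vlimit and learning_rate are set to arbitrary stand-in values, the data is random, and the labels placeholder is float32 here so the cross-entropy op can multiply it with log-probabilities:

import numpy as np
import tensorflow as tf

vlimit = 128           # stand-in value; use your own constant
classes = 3
learning_rate = 0.001  # stand-in value

X = tf.placeholder(tf.float32, [None, vlimit], name='X_placeholder')
Y = tf.placeholder(tf.float32, [None, classes], name='Y_placeholder')  # float one-hot labels
w = tf.Variable(tf.random_normal(shape=[vlimit, classes], stddev=0.01), name='weights')
b = tf.Variable(tf.ones([classes]), name='bias')  # one bias per class, broadcast over the batch
logits = tf.matmul(X, w) + b
entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y, name='loss')
loss = tf.reduce_mean(entropy)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)
prediction = tf.argmax(logits, 1)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # train on a batch of 10
    bx = np.random.rand(10, vlimit).astype(np.float32)
    by = np.eye(classes, dtype=np.float32)[np.random.randint(classes, size=10)]
    sess.run(optimizer, feed_dict={X: bx, Y: by})
    # test on a single sample through the very same graph
    tx = np.random.rand(1, vlimit).astype(np.float32)
    print(sess.run(prediction, feed_dict={X: tx}))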
