
TensorFlow: difference between [batch_size, 1] and [batch_size]

In the TensorFlow word-embedding (word2vec) tutorial one finds:

# Placeholders for inputs
train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])

What is the difference between these two placeholders? Aren't they both int32 vectors of length batch_size?

Thanks.

I found the answer with a little debugging.

[batch_size] = [ 0, 2, ...]
[batch_size, 1] = [ [0], [2], ...]

Though I still don't know why the second form is used.
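The difference is rank: [batch_size] is a 1-D tensor, while [batch_size, 1] is a 2-D tensor with one column. A likely reason for the second form is that tf.nn.nce_loss, which the word2vec tutorial uses, expects its labels argument to have shape [batch_size, num_true]. Here is a minimal sketch of the shape difference using NumPy (the values are placeholders, not tutorial data):

```python
import numpy as np

batch_size = 4

# Shape [batch_size]: a rank-1 array of word indices, like train_inputs.
inputs = np.array([0, 2, 5, 7], dtype=np.int32)

# Shape [batch_size, 1]: a rank-2 array with one column, like train_labels.
# The same values, but each wrapped in its own row.
labels = inputs.reshape(batch_size, 1)

print(inputs.shape)  # (4,)
print(labels.shape)  # (4, 1)
print(inputs.ndim, labels.ndim)  # 1 2
```

Both hold the same batch_size integers; only the rank differs, which matters to ops that require a specific label shape.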

train_inputs is a row vector, while train_labels is a column vector.
