
Why use None for the batch dimension in TensorFlow?

In the following code, None is used to declare the shape of the placeholders.

x_data = tf.placeholder(tf.int32, [None, max_sequence_length]) 
y_output = tf.placeholder(tf.int32, [None])

As far as I know, this None is used to specify a variable batch dimension. But in the same code we also have a variable that holds the batch size, such as:

batch_size = 250

So, is there any reason to use None in such cases instead of simply declaring the placeholders as follows?

x_data = tf.placeholder(tf.int32, [batch_size, max_sequence_length]) 
y_output = tf.placeholder(tf.int32, [batch_size])

It is just so that the input of the network isn't bound to a fixed batch size, and you can later reuse the trained network to predict on either single instances or arbitrarily large batches (e.g. predict all your test samples at once).

In other words, it doesn't do much during training, since batches usually have a fixed size there anyway, but it makes the network more useful at test time.
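Here is a minimal sketch of that idea, using the TF 1.x-style API from the question. The value of max_sequence_length and the reduce_sum op are only illustrative stand-ins for the real model; the point is that the same graph, built with None in the batch dimension, accepts both a full training batch and a single test instance:

import numpy as np
import tensorflow as tf  # TF 1.x-style API, matching the question

max_sequence_length = 25   # assumed value, just for illustration
batch_size = 250

# Batch dimension left as None, so any number of rows can be fed at run time.
x_data = tf.placeholder(tf.int32, [None, max_sequence_length])
row_sums = tf.reduce_sum(x_data, axis=1)  # trivial op standing in for the real model

with tf.Session() as sess:
    # Training-style feed: a full batch of 250 sequences.
    train_batch = np.zeros((batch_size, max_sequence_length), dtype=np.int32)
    print(sess.run(row_sums, feed_dict={x_data: train_batch}).shape)  # (250,)

    # Test-time feed: a single instance through the same graph.
    single_example = np.zeros((1, max_sequence_length), dtype=np.int32)
    print(sess.run(row_sums, feed_dict={x_data: single_example}).shape)  # (1,)

If the placeholder had been declared with [batch_size, max_sequence_length] instead, the second feed would raise a shape-mismatch error, since only batches of exactly 250 rows would be accepted.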
