I'm training an RNN, similar to the one in the TensorFlow tutorial, on sequential data. The data has shape [batch_size, step, dimension] and the labels have shape [batch_size, num_classes].
Since sequence length differs between samples, I would like to train in batches: at each step I grab 32 samples, pad them to the longest sequence length in the batch, and then feed them into the RNN graph for training.
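For reference, the padding step I have in mind looks roughly like this (a minimal NumPy sketch; pad_batch is a hypothetical helper, not part of my DataGenerator):

```python
import numpy as np

def pad_batch(batch, num_dim):
    # batch: list of [seq_len_i, num_dim] arrays with varying seq_len_i.
    seqlens = np.array([len(s) for s in batch], dtype=np.int32)
    max_len = seqlens.max()
    # Zero-pad every sample to the longest sequence in the batch.
    padded = np.zeros((len(batch), max_len, num_dim), dtype=np.float32)
    for i, s in enumerate(batch):
        padded[i, :len(s), :] = s
    return padded, seqlens

batch = [np.ones((3, 2)), np.ones((5, 2))]
padded, seqlens = pad_batch(batch, num_dim=2)
# padded.shape == (2, 5, 2); seqlens == [3, 5]
```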
The data is defined as:
data = DataGenerator(data_path, label_path)
train_data, train_label, train_seqlen, test_data, test_label = data.train_test_data(0.2)
x = tf.placeholder(tf.float32, [batch_size, None, num_dim])
y = tf.placeholder(tf.float32, [batch_size, num_classes])
seqlen = tf.placeholder(tf.int32, [batch_size])
model = VariableSeqModel(x, y, seqlen)
train_data is [batch_size, step, dim] and train_label is [batch_size, num_classes]. seqlen is [batch_size, 1] and records the actual sequence length of each sample in the batch. Is it correct to define x as [batch_size, None, num_dim] for variable sequence lengths?
After defining the RNN and the data structures, I launch the session as in this code sample:
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    step = 1
    while step * batch_size < 1000:
        batch_xx, batch_y, batch_seqlen = data.next(batch_size, train_data, train_label, train_seqlen)
        batch_x = data.batch_padding(batch_xx, batch_seqlen)
        sess.run(model.optimize, feed_dict={x: batch_xx, y: batch_y, seqlen: batch_seqlen})
        step += 1
I hit the following ValueError (stack trace below):
dynamic_rnn.py in <module>()
--> 129 sess.run(model.optimize, feed_dict={x: batch_xx, y: batch_y, seqlen: batch_seqlen})
tensorflow/python/client/session.pyc in run(self, fetches, feed_dict, options, run_metadata)
708 try:
709 result = self._run(None, fetches, feed_dict, options_ptr,
--> 710 run_metadata_ptr)
711 if run_metadata:
712 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
tensorflow/python/client/session.pyc in _run(self, handle, fetches, feed_dict, options, run_metadata)
879 ' to a larger type (e.g. int64).')
880
--> 881 np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
882
883 if not subfeed_t.get_shape().is_compatible_with(np_val.shape):
numpy/core/numeric.pyc in asarray(a, dtype, order)
480
481 """
--> 482 return array(a, dtype, copy=False, order=order)
483
484 def asanyarray(a, dtype=None, order=None):
ValueError: setting an array element with a sequence.
I am stumped at this point. Any help appreciated!
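In case it helps, I can reproduce the same error outside TensorFlow: session.run calls np.asarray on each feed value, and asarray raises exactly this ValueError when handed a ragged list of variable-length sequences (a minimal sketch, assuming the unpadded batch is a plain Python list):

```python
import numpy as np

# A ragged "batch": two sequences of different lengths, like batch_xx
# before padding.
ragged = [[[1.0, 2.0]],
          [[1.0, 2.0], [3.0, 4.0]]]

try:
    np.asarray(ragged, dtype=np.float32)  # what session.run does to each feed
except ValueError as e:
    print(e)  # same "setting an array element with a sequence" class of error
```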
I am not an expert, but it seems to me that the problem is here:
Seqlen is [batch_size,1]
As the TensorFlow documentation says, the sequence length should be:
sequence_length: (optional) An int32/int64 vector sized [batch_size].
You could try populating your seqlen array accordingly. Hope this helps.
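If the seqlen array is currently shaped [batch_size, 1], flattening it to the documented vector shape is a one-liner (NumPy sketch with made-up lengths):

```python
import numpy as np

batch_seqlen = np.array([[5], [3], [7]], dtype=np.int32)  # shape (3, 1)
batch_seqlen = batch_seqlen.ravel()                       # shape (3,)
# Now it matches sequence_length's documented shape:
# an int32 vector sized [batch_size].
```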
Problem solved. Simply declare seqlen = tf.placeholder(tf.int32, [None])
and it works. [None] here gives the placeholder a dynamic batch dimension, and the dtype is simply tf.int32.