I don't understand how to run the result of tf.train.batch for multiple epochs. It runs out after one pass, of course, and I don't know how to restart it.
The approaches I've considered so far:

- tile, which is complicated but described in full here.
- Drawing batch_size random integers between 0 and num_examples on each step. (My examples all sit in local RAM.) I haven't found an easy way to get all of these random draws at once.
- Repeating the data num_epochs times to full size, then shuffling. I find this confusing because I'm not really building an input pipeline (my input fits in memory), yet I still need to build out batching, shuffling and multiple epochs, which possibly requires more knowledge of input pipelines.
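The second idea above (random index draws against data held in RAM) can be sketched in a few lines of NumPy; the names here (next_batch, data) are illustrative, not from the original post:

```python
import numpy as np

# Toy in-memory examples: 100 rows of 2 features each.
data = np.arange(200, dtype=np.float32).reshape(100, 2)
num_examples, batch_size = data.shape[0], 8
rng = np.random.default_rng(0)

def next_batch():
    # Draw batch_size random row indices (with replacement) each step.
    idx = rng.integers(0, num_examples, size=batch_size)
    return data[idx]

batch = next_batch()
print(batch.shape)  # (8, 2)
```

Sampling with replacement like this never "runs out", so there is no epoch boundary to restart; the trade-off is that it does not guarantee every example is seen exactly once per epoch.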
tf.train.batch simply groups upstream samples into batches, and nothing more. It is meant to be used at the end of an input pipeline; data and epochs are dealt with upstream.
For example, if your training data fits into a tensor, you could use tf.train.slice_input_producer
to produce samples. This function has arguments for shuffling and epochs.
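A minimal sketch of that arrangement, assuming the TF1 queue-runner API (under TF2 this requires the compat.v1 shim shown below; the toy data and counts are illustrative):

```python
import numpy as np
import tensorflow.compat.v1 as tf  # under TF1, just: import tensorflow as tf

tf.disable_eager_execution()

features = np.arange(20, dtype=np.float32).reshape(10, 2)
labels = np.arange(10, dtype=np.int32)

# slice_input_producer handles shuffling and epoch counting upstream;
# it emits one (feature, label) pair at a time and raises OutOfRangeError
# once num_epochs full passes have been consumed.
x, y = tf.train.slice_input_producer([features, labels],
                                     num_epochs=3, shuffle=True)

# tf.train.batch only groups those samples into batches, nothing more.
batch_x, batch_y = tf.train.batch([x, y], batch_size=5)

with tf.Session() as sess:
    # The epoch counter lives in a local variable, so local init is required.
    sess.run([tf.global_variables_initializer(),
              tf.local_variables_initializer()])
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    n_batches = 0
    try:
        while True:
            sess.run([batch_x, batch_y])
            n_batches += 1
    except tf.errors.OutOfRangeError:
        pass  # all 3 epochs exhausted
    finally:
        coord.request_stop()
        coord.join(threads)
    print(n_batches)  # 10 examples * 3 epochs / batch_size 5 = 6
```

So batching, shuffling, and the multi-epoch loop all come from the producer; the training loop just runs the batch op until the queue signals it is out of range.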