TensorFlow train batches for multiple epochs?
I don't understand how to run the result of tf.train.batch for multiple epochs. It runs out once, of course, and I don't know how to restart it. Maybe I could repeat it using tile, which is complicated but described in full here.

It would be fine if I could redraw a batch each time; I would need batch_size random integers between 0 and num_examples. (My examples all sit in local RAM.) I haven't found an easy way to get these random draws at once.

Ideally the shuffling should happen too when a batch is repeated, but to me it makes more sense to run one epoch, then shuffle, and so on, rather than to join the training space to num_epochs times its own size and then shuffle. I think this is confusing because I'm not really building an input pipeline, since my input fits in memory, yet I still need batching, shuffling and multiple epochs, which possibly requires more knowledge of input pipelines.
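For reference, the "redraw a batch each time" idea from the question can be sketched in plain NumPy, since the examples sit in RAM. The names examples, num_examples and batch_size mirror the question; note this samples with replacement, so it approximates epochs rather than guaranteeing each example appears exactly once per pass:

```python
import numpy as np

rng = np.random.default_rng(0)
num_examples, batch_size = 10, 4
examples = np.arange(num_examples) * 10  # stand-in for the in-memory training data

def draw_batch():
    # batch_size random integers in [0, num_examples): one draw per call,
    # so every training step just calls draw_batch() again.
    idx = rng.integers(0, num_examples, size=batch_size)
    return examples[idx]

batch = draw_batch()
```

For exact epochs one would instead shuffle a permutation of the indices once per pass and slice it into batches.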
tf.train.batch simply groups upstream samples into batches, and nothing more. It is meant to be used at the end of an input pipeline; data and epochs are dealt with upstream.

For example, if your training data fits into a tensor, you could use tf.train.slice_input_producer to produce samples. This function has arguments for shuffling and epochs.