Training batches: which TensorFlow method is the right one?
I'm trying to train a very simple neural network to classify samples of data where some classes necessarily succeed others; this is why I decided to feed the input data to the network in batches. TensorFlow apparently offers multiple ways of declaring batches, such as tf.data.Dataset.batch (with which I currently train using the Adam optimizer) and tf.train.batch. What is the difference? Should the methods be used together, or are they mutually exclusive? In the latter case: which one should I prefer?
tf.train.* is an older API, more complex and error-prone than the tf.data.* one (you need to take care of queues, thread runners, the coordinator, and so on yourself). For your stated purpose (batching data and feeding it to a model), the two are functionally equivalent: both achieve your goal. However, you should consider using tf.data, as it is both simpler to use and the currently recommended way to handle input datasets.
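As a minimal sketch of the recommended tf.data approach (the synthetic data, layer sizes, and batch size here are made up for illustration), you can slice your tensors into a Dataset, batch it, and pass it directly to a Keras model trained with Adam:

```python
import tensorflow as tf

# Synthetic stand-ins for your data: 100 samples, 8 features, 3 classes.
features = tf.random.uniform((100, 8))
labels = tf.random.uniform((100,), maxval=3, dtype=tf.int32)

# Build the input pipeline: slice into individual samples, then batch.
# batch() groups consecutive samples, so the order of your data is preserved;
# the final batch may be smaller than 16 if the sample count doesn't divide evenly.
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(16)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)

# model.fit accepts the Dataset directly; no queues or coordinators needed.
model.fit(dataset, epochs=2, verbose=0)
```

Since the batches are taken in order, this also fits the case where some classes necessarily follow others; if you later want shuffling, you would call dataset.shuffle(...) before batch().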