Advantage of using experiments in TensorFlow
Many of TensorFlow's example applications create Experiments and run one of the Experiment's methods by calling tf.contrib.data.learn_runner.run. It looks like an Experiment is essentially a wrapper for an Estimator.
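To make the "wrapper" relationship concrete, here is a minimal, dependency-free sketch of the pattern the question describes. The class and method names are illustrative stand-ins, not the real TensorFlow API: an Experiment bundles an Estimator together with its input functions so that a runner can drive the train/eval loops.

```python
class Estimator:
    """Stand-in for tf.estimator.Estimator: knows how to train and evaluate."""
    def __init__(self):
        self.steps_trained = 0

    def train(self, input_fn, steps):
        self.steps_trained += steps

    def evaluate(self, input_fn):
        return {"steps_trained": self.steps_trained}


class Experiment:
    """Stand-in for tf.contrib.learn.Experiment: pairs an Estimator with
    its training/evaluation inputs so a runner can invoke the loops."""
    def __init__(self, estimator, train_input_fn, eval_input_fn, train_steps):
        self.estimator = estimator
        self.train_input_fn = train_input_fn
        self.eval_input_fn = eval_input_fn
        self.train_steps = train_steps

    def train_and_evaluate(self):
        self.estimator.train(self.train_input_fn, self.train_steps)
        return self.estimator.evaluate(self.eval_input_fn)


exp = Experiment(Estimator(), train_input_fn=None, eval_input_fn=None,
                 train_steps=100)
print(exp.train_and_evaluate())  # {'steps_trained': 100}
```

The point of the indirection is that a generic runner (like learn_runner.run) can accept any such bundle and decide, based on the environment, how to schedule the loops.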
The code needed to create and run an Experiment looks more complex than the code needed to create, train, and evaluate an Estimator. I'm sure there's an advantage to using Experiments, but I can't figure out what it is. Could someone fill me in?
tf.contrib.learn.Experiment is a high-level API for distributed training. Here's from its doc:

Experiment is a class containing all information needed to train a model.

After an experiment is created (by passing an Estimator and inputs for training and evaluation), an Experiment instance knows how to invoke training and eval loops in a sensible fashion for distributed training.
Just like tf.estimator.Estimator (and the derived classes) is a high-level API that hides matrix multiplications, checkpoint saving, and so on, tf.contrib.learn.Experiment tries to hide the boilerplate you'd need for distributed computation, namely tf.train.ClusterSpec, tf.train.Server, jobs, tasks, etc.
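As an illustration of the boilerplate being hidden, distributed TensorFlow jobs are typically described by a TF_CONFIG environment variable holding the cluster layout and this process's role. The hostnames below are placeholders; the structure is what a runner parses to build the ClusterSpec and Server for you:

```python
import json
import os

# Example cluster description: one parameter server and two workers,
# with this process acting as worker 0. Hostnames are hypothetical.
tf_config = {
    "cluster": {
        "ps": ["ps0.example.com:2222"],
        "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
    },
    "task": {"type": "worker", "index": 0},
}
os.environ["TF_CONFIG"] = json.dumps(tf_config)
```

With an Experiment, you set this environment up and hand the Experiment to the runner; without it, you would construct the ClusterSpec and Server and dispatch the per-role loops yourself.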
You can train and evaluate the tf.estimator.Estimator on a single machine without an Experiment. See the examples in this tutorial.