Difference between tf.train.Checkpoint and tf.train.Saver
I found there are different ways to save/restore models and variables in TensorFlow. These ways include tf.train.Saver, tf.saved_model, and tf.train.Checkpoint. In TensorFlow's documentation, I found some differences between them:

- tf.saved_model is a thin wrapper around tf.train.Saver.
- tf.train.Checkpoint supports eager execution, but tf.train.Saver does not.
- tf.train.Checkpoint does not create a .meta file, yet it can still load the graph structure (this is the big question: how can it do that?).

How can tf.train.Checkpoint load a graph without a .meta file? Or, more generally, what is the difference between tf.train.Saver and tf.train.Checkpoint?
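For reference, a minimal name-based save with tf.train.Saver looks roughly like the sketch below (TF1 semantics via tf.compat.v1; the paths and variable names are just placeholders). This is the style that writes the .meta graph file:

```python
import tensorflow as tf

# TF1-style, name-based checkpointing (tf.compat.v1 when running under TF2).
# Variables are matched by their .name at restore time, and saver.save() also
# writes a .meta file containing the serialized graph.
tf.compat.v1.disable_eager_execution()
v = tf.compat.v1.get_variable("my_var", shape=[2])
saver = tf.compat.v1.train.Saver()

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    saver.save(sess, "/tmp/saver_demo/model.ckpt")
# Files on disk: checkpoint, model.ckpt.index,
# model.ckpt.data-00000-of-00001, model.ckpt.meta
```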
According to the TensorFlow docs:

Checkpoint.save and Checkpoint.restore write and read object-based checkpoints, in contrast to tf.train.Saver, which writes and reads variable.name based checkpoints. Object-based checkpointing saves a graph of dependencies between Python objects (Layers, Optimizers, Variables, etc.) with named edges, and this graph is used to match variables when restoring a checkpoint. It can be more robust to changes in the Python program, and helps to support restore-on-create for variables when executing eagerly. Prefer tf.train.Checkpoint over tf.train.Saver for new code.
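As a minimal sketch of the object-based style (TF2 / eager execution; the layer sizes and paths are just illustrative), tf.train.Checkpoint stores only the variable values plus the object graph, so no .meta file appears, and restore works by matching the structure of the Python objects:

```python
import tensorflow as tf

# Object-based checkpointing: variables are tracked by their attribute path
# from the Checkpoint root (e.g. "net/kernel"), not by variable.name, and no
# .meta graph file is written -- only variable values plus the object graph.
net = tf.keras.layers.Dense(3)
net(tf.zeros([1, 5]))                    # build the layer so its variables exist
ckpt = tf.train.Checkpoint(net=net)
save_path = ckpt.save("/tmp/object_based/ckpt")   # -> ckpt-1.index, ckpt-1.data-*, no .meta

# Restore into a freshly created layer. Matching is structural, so values can
# be filled in lazily as soon as the matching variables are created
# ("restore-on-create"), which is why this works under eager execution even
# though no serialized graph was saved.
new_net = tf.keras.layers.Dense(3)
new_ckpt = tf.train.Checkpoint(net=new_net)
new_ckpt.restore(save_path)
new_net(tf.zeros([1, 5]))                # variables created here receive the restored values
```

This also answers the .meta question: tf.train.Checkpoint never serializes the graph at all; it relies on your Python code to recreate the objects, and only maps saved values back onto them.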