
Difference between tf.train.Checkpoint and tf.train.Saver

I found there are different ways to save/restore models and variables in TensorFlow, including tf.train.Saver, tf.train.Checkpoint, and tf.saved_model.

In TensorFlow's documentation, I found some differences between them:

  1. tf.saved_model is a thin wrapper around tf.train.Saver.
  2. tf.train.Checkpoint supports eager execution, but tf.train.Saver does not.
  3. tf.train.Checkpoint does not create a .meta file but can still load the graph structure (this is the big question: how can it do that?)
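Point 3 can be checked directly. The sketch below is my own example (assuming TF 2.x with eager execution and a writable /tmp directory); it saves an object-based checkpoint, verifies that no .meta file was written, and restores the value into a fresh variable:

```python
import os
import tensorflow as tf

# Minimal sketch: checkpoint a single variable with tf.train.Checkpoint.
ckpt_dir = "/tmp/tf_ckpt_demo"
os.makedirs(ckpt_dir, exist_ok=True)

v = tf.Variable(3.0)
ckpt = tf.train.Checkpoint(v=v)
save_path = ckpt.save(os.path.join(ckpt_dir, "ckpt"))

# Only an index file and data shards are written -- no .meta graph file.
files = os.listdir(ckpt_dir)
assert not any(f.endswith(".meta") for f in files)

# Restore into a fresh variable: it is matched through the saved object
# graph (the edge named "v"), not through the variable's internal name.
v2 = tf.Variable(0.0)
tf.train.Checkpoint(v=v2).restore(save_path)
print(v2.numpy())  # 3.0
```

Note that no graph is loaded here at all: the Python program recreates the objects itself, and the checkpoint only restores variable values by matching the saved object graph, which is why no .meta file is needed.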

How can tf.train.Checkpoint load a graph without a .meta file? Or, more generally, what is the difference between tf.train.Saver and tf.train.Checkpoint?

According to the TensorFlow docs:

Checkpoint.save and Checkpoint.restore write and read object-based checkpoints, in contrast to tf.train.Saver, which writes and reads variable.name-based checkpoints. Object-based checkpointing saves a graph of dependencies between Python objects (Layers, Optimizers, Variables, etc.) with named edges, and this graph is used to match variables when restoring a checkpoint. It can be more robust to changes in the Python program, and helps to support restore-on-create for variables when executing eagerly. Prefer tf.train.Checkpoint over tf.train.Saver for new code.
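The restore-on-create behaviour mentioned in the quote can be sketched as follows (my own example, assuming TF 2.x eager execution and a writable /tmp directory): restore() is called before the matching variable exists, and the saved value is applied as soon as the variable is attached to the object graph.

```python
import os
import tensorflow as tf

os.makedirs("/tmp/tf_restore_demo", exist_ok=True)

# Save a checkpoint containing one tracked variable.
root = tf.train.Checkpoint()
root.value = tf.Variable(42.0)
path = root.save("/tmp/tf_restore_demo/ckpt")

# Begin restoring before the variable exists ...
fresh = tf.train.Checkpoint()
status = fresh.restore(path)

# ... then create it: the saved value is applied at the moment the
# matching object-graph edge ("value") is attached.
fresh.value = tf.Variable(0.0)
print(fresh.value.numpy())  # 42.0
status.assert_consumed()   # every saved value found a matching object
```

This deferred matching is what makes object-based checkpoints work under eager execution, where variables are created on the fly rather than declared up front in a static graph.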

