
Difference between tf.train.Checkpoint and tf.train.Saver

I found that there are different ways to save/restore models and variables in TensorFlow, including tf.train.Saver, tf.train.Checkpoint, and tf.saved_model.

In TensorFlow's documentation, I found some differences between them:

  1. tf.saved_model is a thin wrapper around tf.train.Saver.
  2. tf.train.Checkpoint supports eager execution, but tf.train.Saver does not.
  3. tf.train.Checkpoint does not create a .meta file but can still load the graph structure (here is the big question: how can it do that?).

How can tf.train.Checkpoint load a graph without a .meta file? Or, more generally, what is the difference between tf.train.Saver and tf.train.Checkpoint?

According to the TensorFlow docs:

Checkpoint.save and Checkpoint.restore write and read object-based checkpoints, in contrast to tf.train.Saver which writes and reads variable.name based checkpoints. Object-based checkpointing saves a graph of dependencies between Python objects (Layers, Optimizers, Variables, etc.) with named edges, and this graph is used to match variables when restoring a checkpoint. It can be more robust to changes in the Python program, and helps to support restore-on-create for variables when executing eagerly. Prefer tf.train.Checkpoint over tf.train.Saver for new code.
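To make the object-based idea concrete, here is a minimal TF2-style sketch (the edge names step/net and the path /tmp/demo_ckpt are made up for illustration). It also answers the .meta question: tf.train.Checkpoint never loads a serialized graph at all. You re-run the Python code that builds the objects, so the "graph structure" lives in your program, and restore() matches the saved values to the new variables by walking the object graph along the named edges rather than by variable.name:

    import tensorflow as tf

    # Trackable objects; the keyword names below ("step", "net") become
    # named edges in the saved dependency graph. They are arbitrary names
    # chosen for this sketch, not special API names.
    step = tf.Variable(0)
    net = tf.keras.layers.Dense(4)
    net.build((None, 8))                    # create the layer's variables
    step.assign_add(10)

    ckpt = tf.train.Checkpoint(step=step, net=net)
    path = ckpt.save("/tmp/demo_ckpt")      # writes e.g. /tmp/demo_ckpt-1

    # Restoring: no .meta file is read. Rebuild the objects by running
    # the same Python code, then restore() matches saved values to
    # variables via the object graph, not via variable.name.
    step2 = tf.Variable(0, name="renamed")  # the variable's name is irrelevant
    net2 = tf.keras.layers.Dense(4)
    ckpt2 = tf.train.Checkpoint(step=step2, net=net2)
    status = ckpt2.restore(path)

    net2.build((None, 8))                   # restore-on-create fills these in
    status.assert_existing_objects_matched()
    print(step2.numpy())                    # -> 10, the saved value

By contrast, a tf.train.Saver checkpoint is keyed purely by variable.name strings, so giving a variable a different name in the Python program (as step2 does above) would break a name-based restore, but is harmless here because matching happens along the "step" edge.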
