I found there are different ways to save/restore models and variables in TensorFlow, including tf.saved_model, tf.train.Saver, and tf.train.Checkpoint.
In TensorFlow's documentation, I found some differences between them:

- tf.saved_model is a thin wrapper around tf.train.Saver.
- tf.train.Checkpoint supports eager execution, but tf.train.Saver does not.
- tf.train.Checkpoint does not create a .meta file, but can still load the graph structure (here is the big question: how can it do that?).

How can tf.train.Checkpoint load a graph without a .meta file? Or, more generally, what is the difference between tf.train.Saver and tf.train.Checkpoint?
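The claim about the missing .meta file can be checked directly. The sketch below (assuming TensorFlow 2.x is installed; the variable name "step" and the directory are arbitrary) saves a tf.train.Checkpoint and lists the files it produces:

```python
import os
import tempfile

import tensorflow as tf

ckpt_dir = tempfile.mkdtemp()

# An object-based checkpoint tracking a single variable.
ckpt = tf.train.Checkpoint(step=tf.Variable(1))
save_path = ckpt.save(os.path.join(ckpt_dir, "model"))  # e.g. .../model-1

written = sorted(os.listdir(ckpt_dir))
# Expect a 'checkpoint' state file, 'model-1.index', and
# 'model-1.data-00000-of-00000' -- but no 'model-1.meta' graph file,
# which tf.train.Saver (TF1) would also have written.
print(written)

# Restoring works from the same prefix without any .meta file.
ckpt.restore(save_path)
```

The .index and .data files hold only variable values plus the object-dependency metadata, not a serialized GraphDef, which is why no .meta file is needed.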
According to the TensorFlow docs:

Checkpoint.save and Checkpoint.restore write and read object-based checkpoints, in contrast to tf.train.Saver, which writes and reads variable.name-based checkpoints. Object-based checkpointing saves a graph of dependencies between Python objects (Layers, Optimizers, Variables, etc.) with named edges, and this graph is used to match variables when restoring a checkpoint. It can be more robust to changes in the Python program, and helps to support restore-on-create for variables when executing eagerly. Prefer tf.train.Checkpoint over tf.train.Saver for new code.
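This also answers the "how can it load the graph without a .meta file?" part: tf.train.Checkpoint never needs the serialized graph, because your Python code rebuilds the objects, and restore only has to match saved values to them by walking the object-dependency graph's named edges. A rough pure-Python sketch of that idea (not TensorFlow's actual implementation; all class and function names here are invented):

```python
class Variable:
    """Stand-in for tf.Variable: just a value holder."""
    def __init__(self, value):
        self.value = value

def save_object_based(root):
    """Flatten the object graph; keys are edge paths like 'net/kernel',
    derived from attribute names, not from global variable names."""
    flat = {}
    def walk(obj, path):
        if isinstance(obj, Variable):
            flat[path] = obj.value
        else:
            for name, child in vars(obj).items():
                walk(child, f"{path}/{name}" if path else name)
    walk(root, "")
    return flat

def restore_object_based(root, flat):
    """Match saved values onto a freshly rebuilt object graph by re-walking
    the same named edges -- no stored graph definition required."""
    def walk(obj, path):
        if isinstance(obj, Variable):
            obj.value = flat[path]
        else:
            for name, child in vars(obj).items():
                walk(child, f"{path}/{name}" if path else name)
    walk(root, "")

class Model:
    def __init__(self):
        self.kernel = Variable(0.0)
        self.bias = Variable(0.0)

class Trainer:
    def __init__(self):
        self.net = Model()

old = Trainer()
old.net.kernel.value = 3.0
flat = save_object_based(old)   # {'net/kernel': 3.0, 'net/bias': 0.0}

new = Trainer()                 # rebuilt from scratch by the Python program
restore_object_based(new, flat)
```

Because matching happens along edges of the object graph, the checkpoint survives renamed Python variables or a refactored program, as long as the dependency structure (net -> kernel, net -> bias) stays the same. This is also what enables restore-on-create in eager mode: TensorFlow can defer assigning a saved value until the corresponding object is actually built.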