
How to use TensorFlow Saver with multiple models?

I'm having a lot of trouble understanding the proper use of tf.train.Saver.

I have a session in which I create several separate network models. All of them are trained, and I save the best-performing networks for later use.

However, when I try to restore a model at a later time, I get an error that seems to indicate that some variables are not being saved or restored:

NotFoundError: Tensor name "Network_8/train/beta2_power" not found in checkpoint files networks/network_0.ckpt

For some reason, when I try to load the variables for Network_0, I'm told that variable information for Network_8 is needed.

What is the best way to make sure I only save/restore the correct variables from a multi-network session?

It seems part of my problem is that, while I have created a dict of the variables I want to save for each network (the weights and biases), setting up an optimizer such as AdamOptimizer makes TensorFlow automatically create extra variables that also need to be initialized. This is fine if you use tf.train.Saver to save ALL variables and you only have one network; however, I am training multiple networks and saving only the best results. I'm not sure how to add the variables TensorFlow automatically creates to my dict for saving.
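For context, here is a sketch of one way to capture those extra variables; it is not the poster's exact code, and build_network, inputs, labels, and num_networks are placeholders. The idea is to build each network inside its own tf.variable_scope and collect every variable under that scope, which picks up the Adam slot variables as well; in the setup above, the beta accumulators from the error message (Network_8/train/beta2_power) also land under the scope, though this can vary by TensorFlow 1.x version:

```python
import tensorflow as tf

savers = {}
for i in range(num_networks):                       # num_networks: assumed defined
    scope = "Network_%d" % i
    with tf.variable_scope(scope):
        logits = build_network(inputs)              # hypothetical model builder
        loss = tf.losses.mean_squared_error(labels, logits)
        # AdamOptimizer creates extra variables here: the m/v slots and
        # beta1_power/beta2_power accumulators mentioned in the error.
        train_op = tf.train.AdamOptimizer().minimize(loss)

    # Everything under this scope, optimizer bookkeeping included.
    # The trailing "/" keeps "Network_1" from also matching "Network_10".
    scope_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,
                                   scope=scope + "/")
    # Strip the scope prefix from the checkpoint names so the checkpoint
    # can later be restored into a network with a different scope name.
    savers[scope] = tf.train.Saver(
        {v.op.name[len(scope) + 1:]: v for v in scope_vars})

# Later, save only the best network's variables, e.g.:
# savers["Network_0"].save(sess, "networks/network_0.ckpt")
```

Stripping the scope prefix is what makes the saved names reusable across networks; without it, a checkpoint written from Network_0 could only be restored into a graph that also names its variables Network_0/....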

My solution is to create a part_saver that uses the same tensor names in both the original model and the new model (i.e., Network_0 and Network_8) and restores only the needed variables:

part_saver = tf.train.Saver({"W": w, "b": b, ...})

Initialize all the variables in Network_8 before restoring the partial model.
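A minimal sketch of that restore path (w, b, and the checkpoint path are illustrative): because a Saver built from a dict uses the dict keys as the names in the checkpoint, the same keys can map onto Network_8's variables even though their in-graph names differ, and running the initializer first gives values to everything the checkpoint does not cover, such as Network_8/train/beta2_power:

```python
import tensorflow as tf

# w, b: the weight/bias variables built for Network_8 (assumed to exist)
part_saver = tf.train.Saver({"W": w, "b": b})

with tf.Session() as sess:
    # Initialize every variable first, so the optimizer bookkeeping that
    # is absent from the checkpoint still gets a value.
    sess.run(tf.global_variables_initializer())
    # Then overwrite just the mapped variables with the saved weights.
    part_saver.restore(sess, "networks/network_0.ckpt")
```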
