
FailedPreconditionError: GetNext() failed after loading a TensorFlow SavedModel

I built a dedicated class to build, train, save, and then load my models. Saving is done with tf.saved_model.simple_save, and restoring with tf.saved_model.loader.load.

Training and inference are done using the Dataset API. Everything works fine when using a model right after training.

However, if I restore a saved model, inference breaks and throws this error:

FailedPreconditionError (see above for traceback): GetNext() failed because the iterator has not been initialized. Ensure that you have run the initializer operation for this iterator before getting the next element.

[[Node: datasets/cond/IteratorGetNext_1 = IteratorGetNext[output_shapes=[[?,?,30], [?,5]], output_types=[DT_INT32, DT_INT32], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]

I am certain that the iterator is initialized (the print statement is displayed as expected, see code below). Could it have to do with which graph the variables belong to? Any other ideas? I'm kind of stuck here.

(Simplified) code:

import numpy as np
import tensorflow as tf


class Model():
    def __init__(self):
        self.graph = tf.Graph()
        self.sess = tf.Session(graph=self.graph)
        with self.graph.as_default():
            # placeholders that feed the Dataset (dtypes/shapes elided)
            self.features_data_ph = tf.placeholder(...)
            self.labels_data_ph = tf.placeholder(...)

    def build(self):
        with self.graph.as_default():
            self.logits = my_model(self.input_tensor)
            self.loss = my_loss(self.logits, self.labels_tensor)

    def train(self):
        my_training_procedure()

    def set_datasets(self):
        with self.graph.as_default():
            with tf.variable_scope('datasets'):
                self.dataset = tf.data.Dataset.from_tensor_slices((self.features_data_ph, self.labels_data_ph))
                self.iter = self.dataset.make_initializable_iterator()
                self.input_tensor, self.labels_tensor = self.iter.get_next()

    def initialize_iterators(self, inference_data):
        with self.graph.as_default():
            feats = inference_data
            labs = np.zeros((len(feats), self.hp.num_classes))
            self.sess.run(self.iter.initializer,
                feed_dict={self.features_data_ph: feats,
                    self.labels_data_ph: labs})
            print('Iterator ready to infer')

    def infer(self, inference_data):
        self.initialize_iterators(inference_data)
        return self.sess.run(self.logits)

    def save(self, path):
        inputs = {"features_data_ph": self.features_data_ph,
            "labels_data_ph": self.labels_data_ph}
        outputs = {"logits": self.logits}
        tf.saved_model.simple_save(self.sess, path, inputs, outputs)

    @staticmethod
    def restore(path):
        model = Model()
        tf.saved_model.loader.load(model.sess, [tf.saved_model.tag_constants.SERVING], path)
        model.features_data_ph = model.graph.get_tensor_by_name("features_data_ph:0")
        model.labels_data_ph = model.graph.get_tensor_by_name("labels_data_ph:0")
        model.logits = model.graph.get_tensor_by_name("model/classifier/dense/BiasAdd:0")
        model.set_datasets()
        return model

Failing routine:

model1 = Model()
model1.build()
model1.train()
model1.save(model1_path)

...

model2 = Model.restore(model1_path)
model2.infer(some_numpy_array) # Error here, after print, at sess.run()

(Restoring the model works: tensor values match between the original and restored models.)

I ran into the same problem and I believe the issue is that you're initializing a new Dataset object rather than initializing the Iterator that was saved with the model.

Try:

# look up the iterator-creation op that was saved with the graph
make_iter = model.graph.get_operation_by_name("YOURPREFIX/MakeIterator")
model.sess.run(make_iter, feed_dict={...})
model.infer(some_numpy_array)
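Adapted to the question's Model, that might look something like this (a rough sketch: the op name "datasets/MakeIterator" assumes the iterator was created under the variable_scope 'datasets' as above, and num_classes stands in for self.hp.num_classes):

# sketch: re-initialize the iterator that was restored with the graph,
# instead of the fresh one created by set_datasets()
model2 = Model.restore(model1_path)

feats = some_numpy_array
labs = np.zeros((len(feats), num_classes))  # dummy labels, as in the question

make_iter = model2.graph.get_operation_by_name("datasets/MakeIterator")  # assumed name
model2.sess.run(make_iter, feed_dict={model2.features_data_ph: feats,
                                      model2.labels_data_ph: labs})
logits = model2.sess.run(model2.logits)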

I solved the problem by changing the way I create the Dataset:

# give the initializer op an explicit name so it can be retrieved later
iterator = tf.data.Iterator.from_structure(dataset.output_types, dataset.output_shapes)
dataset_init_op = iterator.make_initializer(dataset, name='dataset_init')
...
# restoring
dataset_init_op = restored_graph.get_operation_by_name('dataset_init')
sess.run(
    dataset_init_op,
    feed_dict={...}
)

A working piece of code is available here: https://vict0rsch.github.io/2018/05/17/restore-tf-model-dataset/
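For reference, here is a minimal end-to-end sketch of this approach (the placeholder name, shapes, dense layer, and export directory are illustrative assumptions, not from the original post):

import numpy as np
import tensorflow as tf

# --- build & save ---
graph = tf.Graph()
with graph.as_default():
    features_ph = tf.placeholder(tf.float32, [None, 4], name='features_ph')
    dataset = tf.data.Dataset.from_tensor_slices(features_ph).batch(2)
    # structure-based iterator with a *named* initializer op
    iterator = tf.data.Iterator.from_structure(dataset.output_types, dataset.output_shapes)
    dataset_init_op = iterator.make_initializer(dataset, name='dataset_init')
    logits = tf.layers.dense(iterator.get_next(), 3, name='dense')

    with tf.Session(graph=graph) as sess:
        sess.run(tf.global_variables_initializer())
        tf.saved_model.simple_save(sess, 'saved/1',
                                   inputs={'features_ph': features_ph},
                                   outputs={'logits': logits})

# --- restore & infer ---
restored_graph = tf.Graph()
with restored_graph.as_default():
    with tf.Session(graph=restored_graph) as sess:
        tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], 'saved/1')
        features_ph = restored_graph.get_tensor_by_name('features_ph:0')
        logits = restored_graph.get_tensor_by_name('dense/BiasAdd:0')
        # the named initializer comes back as an *operation*, then gets run
        dataset_init_op = restored_graph.get_operation_by_name('dataset_init')
        sess.run(dataset_init_op, feed_dict={features_ph: np.random.rand(4, 4)})
        print(sess.run(logits))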

A simple way: before saving, add one line of code:

tf.add_to_collection("saved_model_main_op", tf.group([train_iter], name='legacy_init_op'))

"saved_model_main_op" is a fixed collection key: the SavedModel loader looks it up and runs the op stored under it right after restoring the graph.

train_iter is the op that initializes the iterator.
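Applied to the question's save method, this could look as follows (a sketch, not the original author's code):

def save(self, path):
    inputs = {"features_data_ph": self.features_data_ph,
              "labels_data_ph": self.labels_data_ph}
    outputs = {"logits": self.logits}
    with self.graph.as_default():
        # stored under the fixed key "saved_model_main_op"; the loader
        # runs this op automatically right after restoring the graph
        tf.add_to_collection("saved_model_main_op",
                             tf.group(self.iter.initializer, name='legacy_init_op'))
    tf.saved_model.simple_save(self.sess, path, inputs, outputs)

Note that the main op runs at load time without a feed_dict, so this trick suits iterators whose initializers don't depend on placeholders; the from_tensor_slices-on-placeholders setup from the question would still need the feeds.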
