
Freezing a graph to .pb in TensorFlow 2

We deploy a lot of our models from TF1 by saving them through graph freezing:

from tensorflow.python.tools import optimize_for_inference_lib

# write_graph takes the directory and the file name as separate arguments
tf.train.write_graph(self.session.graph_def, some_dir, some_filename)

# get graph definition with the weights baked in as constants
output_graph_def = tf.graph_util.convert_variables_to_constants(
        self.session,  # The session is used to retrieve the weights
        self.session.graph.as_graph_def(),  # The graph_def is used to retrieve the nodes
        output_nodes,  # The output node names are used to select the useful nodes
)

# optimize graph
if optimize:
    output_graph_def = optimize_for_inference_lib.optimize_for_inference(
            output_graph_def, input_nodes, output_nodes, tf.float32.as_datatype_enum
    )

with open(path, "wb") as f:
    f.write(output_graph_def.SerializeToString())

and then loading them through:

with tf.Graph().as_default() as graph:
    with graph.device("/" + args[name].processing_unit):
        tf.import_graph_def(graph_def, name="")
        for key, value in inputs.items():
            self.input[key] = graph.get_tensor_by_name(value + ":0")
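
Inference then runs TF1-style against the imported graph; a minimal sketch, where the "output:0" tensor name and the "image" input key are placeholders rather than real names from our models:

# minimal TF1-style inference sketch against the imported graph;
# "output:0" and "image" are placeholder names, not real tensors
with tf.compat.v1.Session(graph=graph) as sess:
    output_tensor = graph.get_tensor_by_name("output:0")
    result = sess.run(output_tensor,
                      feed_dict={self.input["image"]: input_batch})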

We would like to save TF2 models in a similar way: one protobuf file that includes both the graph and the weights. How can I achieve this?

I know that there are some methods for saving:

  • keras.experimental.export_saved_model(model, 'path_to_saved_model')

    Which is experimental and creates multiple files :(.

  • model.save('path_to_my_model.h5')

    Which saves the h5 format :(.

  • tf.saved_model.save(self.model, "test_x_model")

    Which again saves multiple files :(.

I use TF2 to convert a model like this:

  1. pass keras.callbacks.ModelCheckpoint(save_weights_only=True) to model.fit to save a checkpoint while training;
  2. after training, load the checkpoint with self.model.load_weights(self.checkpoint_path), then convert it to h5: self.model.save(h5_path, overwrite=True, include_optimizer=False);
  3. convert h5 to pb:
import logging
import tensorflow as tf
from tensorflow.compat.v1 import graph_util
from tensorflow.python.keras import backend as K
from tensorflow import keras

# necessary !!!
tf.compat.v1.disable_eager_execution()

h5_path = '/path/to/model.h5'
model = keras.models.load_model(h5_path)
model.summary()
# save pb
with K.get_session() as sess:
    output_names = [out.op.name for out in model.outputs]
    input_graph_def = sess.graph.as_graph_def()
    for node in input_graph_def.node:
        node.device = ""
    graph = graph_util.remove_training_nodes(input_graph_def)
    graph_frozen = graph_util.convert_variables_to_constants(sess, graph, output_names)
    # write_graph takes the directory and the file name as separate arguments
    tf.io.write_graph(graph_frozen, '/path/to/pb', 'model.pb', as_text=False)
logging.info("save pb successfully!")

The above code is a little old. It succeeded when converting VGG16, but failed when converting a resnet_v2_50 model. My TF version is 2.2.0. Finally, I found a useful code snippet:

import tensorflow as tf
from tensorflow import keras
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2


# use ResNet50V2 as an example
model = tf.keras.applications.ResNet50V2()
 
full_model = tf.function(lambda x: model(x))
full_model = full_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))

# Get frozen ConcreteFunction
frozen_func = convert_variables_to_constants_v2(full_model)
frozen_func.graph.as_graph_def()
 
layers = [op.name for op in frozen_func.graph.get_operations()]
print("-" * 50)
print("Frozen model layers: ")
for layer in layers:
    print(layer)
 
print("-" * 50)
print("Frozen model inputs: ")
print(frozen_func.inputs)
print("Frozen model outputs: ")
print(frozen_func.outputs)
 
# Save frozen graph from frozen ConcreteFunction to hard drive
tf.io.write_graph(graph_or_graph_def=frozen_func.graph,
                  logdir="./frozen_models",
                  name="frozen_graph.pb",
                  as_text=False)

ref: https://github.com/leimao/Frozen_Graph_TensorFlow/tree/master/TensorFlow_v2 (update)
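
For a quick sanity check, the frozen ConcreteFunction can be called directly before writing it to disk; the input shape below is ResNet50V2's default 224x224x3:

# the frozen ConcreteFunction is directly callable; the keyword "x"
# matches the lambda argument used when tracing full_model above
test_input = tf.random.uniform([1, 224, 224, 3])
predictions = frozen_func(x=test_input)
print(predictions[0].shape)  # (1, 1000) class probabilities for ResNet50V2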

I encountered a similar issue and found the solution below:

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
from tensorflow.python.tools import optimize_for_inference_lib

optimize = True  # set False to skip the inference optimizations below

loaded = tf.saved_model.load('models/mnist_test')
infer = loaded.signatures['serving_default']
f = tf.function(infer).get_concrete_function(
    flatten_input=tf.TensorSpec(shape=[None, 28, 28, 1],
                                dtype=tf.float32))  # change this line for your own inputs
f2 = convert_variables_to_constants_v2(f)
graph_def = f2.graph.as_graph_def()

if optimize:
    # Remove NoOp nodes
    for i in reversed(range(len(graph_def.node))):
        if graph_def.node[i].op == 'NoOp':
            del graph_def.node[i]
    for node in graph_def.node:
        for i in reversed(range(len(node.input))):
            if node.input[i][0] == '^':
                del node.input[i]
    # Parse the graph's inputs/outputs
    graph_inputs = [x.name.rsplit(':')[0] for x in f2.inputs]
    graph_outputs = [x.name.rsplit(':')[0] for x in f2.outputs]
    graph_def = optimize_for_inference_lib.optimize_for_inference(
        graph_def, graph_inputs, graph_outputs, tf.float32.as_datatype_enum)

# Export frozen graph
with tf.io.gfile.GFile('optimized_graph.pb', 'wb') as f:
    f.write(graph_def.SerializeToString())

The way I do it at the moment is TF2 -> SavedModel (via keras.experimental.export_saved_model) -> frozen_graph.pb (via the freeze_graph tools, which can take a SavedModel as input). I don't know if this is the "recommended" way to do it, though.
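
For reference, a rough sketch of driving freeze_graph from Python with a SavedModel as input; the output node name here is an assumption (TF2 SavedModels often expose "StatefulPartitionedCall", but check your own graph):

from tensorflow.python.tools import freeze_graph

# freeze a SavedModel directory into a single .pb; "StatefulPartitionedCall"
# is a common output node name for TF2 SavedModels, but verify it for your model
freeze_graph.freeze_graph(input_graph=None,
                          input_saver=None,
                          input_binary=False,
                          input_checkpoint=None,
                          output_node_names="StatefulPartitionedCall",
                          restore_op_name=None,
                          filename_tensor_name=None,
                          output_graph="frozen_graph.pb",
                          clear_devices=True,
                          initializer_nodes=None,
                          input_saved_model_dir="path_to_saved_model")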

Also, I still don't know how to load back the frozen model and run inference "the TF2 way" (aka no graphs, sessions, etc).
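
One pattern that avoids explicit sessions (a sketch following the leimao repo linked above; the tensor names "x:0" and "Identity:0" are assumptions and must match what frozen_func.inputs / frozen_func.outputs printed at freeze time):

import tensorflow as tf

def wrap_frozen_graph(graph_def, inputs, outputs):
    # import the GraphDef inside a wrapped tf.function, then prune it
    # down to a callable ConcreteFunction -- no explicit Session needed
    def _imports_graph_def():
        tf.compat.v1.import_graph_def(graph_def, name="")
    wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, [])
    import_graph = wrapped_import.graph
    return wrapped_import.prune(
        tf.nest.map_structure(import_graph.as_graph_element, inputs),
        tf.nest.map_structure(import_graph.as_graph_element, outputs))

# load the frozen graph back from disk
with tf.io.gfile.GFile("./frozen_models/frozen_graph.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

# "x:0" / "Identity:0" are assumptions -- use the names printed
# from frozen_func.inputs / frozen_func.outputs when freezing
frozen_func = wrap_frozen_graph(graph_def, inputs="x:0", outputs="Identity:0")
predictions = frozen_func(tf.random.uniform([1, 224, 224, 3]))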

You may also take a look at tf.keras.models.save_model(model, 'path', save_format='tf'), which seems to produce checkpoint files (you still need to freeze them, though, so I personally think the SavedModel path is better).
