
What is the difference between frozen_inference_graph.pb and saved_model.pb?

I have a trained model (Faster R-CNN) which I exported using export_inference_graph.py to use for inference. I'm trying to understand the difference between the created frozen_inference_graph.pb, saved_model.pb, and model.ckpt* files. I've also seen .pbtxt representations.

I tried reading through this but couldn't really find the answers: https://www.tensorflow.org/extend/tool_developers/

What does each of these files contain? Which ones can be converted to which other ones? What is the ideal purpose of each?

frozen_inference_graph.pb is a frozen graph that cannot be trained anymore. It defines the GraphDef and is actually a serialized graph, which can be loaded with this code:

import tensorflow as tf

def load_graph(frozen_graph_filename):
    # Read the serialized GraphDef from the binary .pb file
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        return graph_def

tf.import_graph_def(load_graph("frozen_inference_graph.pb"))

The saved model is a model generated by tf.saved_model.builder and has to be imported into a session. This file contains the full graph with all training weights (just like the frozen graph), but here it can still be trained upon; it is not serialized in the same way and needs to be loaded with the snippet below. The [] are tag constants, which can be read by the saved_model_cli. This model is also often served for prediction, for example on Google ML Engine:

with tf.Session() as sess:
    tf.saved_model.loader.load(sess, [], "path/to/saved_model_dir")  # the folder containing saved_model.pb, not the file itself

model.ckpt files are checkpoints, generated during training. They are used to resume training or to have a backup when something goes wrong after a long training run. If you have a saved model and a frozen graph, then you can ignore these.

.pbtxt files are basically the same as the previously discussed models, but human-readable instead of binary. These can be ignored as well.

To answer your conversion question: saved models can be transformed into a frozen graph and vice versa, although a saved_model extracted from a frozen graph is also not trainable; it is simply stored in the saved model format. Checkpoints can be read in and loaded into a session, and there you can build a saved model from them.

Hope I helped; any questions, ask away!

ADDITION:

How to freeze a graph, starting from a saved model folder structure. This post is old, so the method I used before might not work anymore; it will most likely still work with Tensorflow 1.+.

Start off by downloading this file from the tensorflow library, and then this code snippet should do the trick:

    import os
    import freeze_graph # the file you just downloaded
    from tensorflow.python.saved_model import tag_constants # might be unnecessary

    path = "path/to/saved_model"  # root folder of the saved model

    freeze_graph.freeze_graph(
        input_graph=None,
        input_saver=None,
        input_binary=None,
        input_checkpoint=None,
        output_node_names="dense_output/BiasAdd",
        restore_op_name=None,
        filename_tensor_name=None,
        output_graph=os.path.join(path, "frozen_graph.pb"),
        clear_devices=None,
        initializer_nodes=None,
        input_saved_model_dir=path,
        saved_model_tags=tag_constants.SERVING
    )

output_node_names = node name of the final operation; if you end on a dense layer, it will be dense_layer_name/BiasAdd

output_graph = output graph name

input_saved_model_dir = root folder of the saved model

saved_model_tags = saved model tags; in your case this can be None, I did however use a tag.

ANOTHER ADDITION:

The code to load models is already provided above. To actually predict you need a session; for a saved model this session is already created, for a frozen model it's not.

Saved model:

with tf.Session() as sess:
    tf.saved_model.loader.load(sess, [], "path/to/saved_model_dir")  # the folder containing saved_model.pb
    prediction = sess.run(output_tensor, feed_dict={input_tensor: test_images})

Frozen model:

tf.import_graph_def(load_graph("frozen_inference_graph.pb"), name="")
graph = tf.get_default_graph()
input_tensor = graph.get_tensor_by_name("input:0")    # use your graph's own tensor names here
output_tensor = graph.get_tensor_by_name("output:0")
with tf.Session() as sess:
    prediction = sess.run(output_tensor, feed_dict={input_tensor: test_images})

To further understand what your input and output layers are, you need to check them out with tensorboard. Simply add the following line of code into your session:

tf.summary.FileWriter("path/to/folder/to/save/logs", sess.graph)

This line will create a log file that you can open with the cli/powershell. To see how to run tensorboard, check out this previously posted question.

I'd like to add: frozen_graph.pb includes two things: 1. the graph definition and 2. the trained parameters.

Whereas saved_model.pb just has the graph definition.

That's why, if you check the size of both .pb files, frozen_graph.pb will always be larger in size.
