
How to export Keras .h5 to tensorflow .pb?

I have fine-tuned an Inception model with a new dataset and saved it as a ".h5" model in Keras. Now my goal is to run my model on Android TensorFlow, which accepts the ".pb" extension only. My question is: is there any library in Keras or TensorFlow to do this conversion? I have seen this post so far: https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html but can't figure it out yet.

Keras does not include by itself any means to export a TensorFlow graph as a protocol buffers file, but you can do it using regular TensorFlow utilities. Here is a blog post explaining how to do it using the utility script freeze_graph.py included in TensorFlow, which is the "typical" way it is done.

However, I personally find it a nuisance to have to make a checkpoint and then run an external script to obtain a model, and instead prefer to do it from my own Python code, so I use a function like this:

def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True):
    """
    Freezes the state of a session into a pruned computation graph.

    Creates a new computation graph where variable nodes are replaced by
    constants taking their current value in the session. The new graph will be
    pruned so subgraphs that are not necessary to compute the requested
    outputs are removed.
    @param session The TensorFlow session to be frozen.
    @param keep_var_names A list of variable names that should not be frozen,
                          or None to freeze all the variables in the graph.
    @param output_names Names of the relevant graph outputs.
    @param clear_devices Remove the device directives from the graph for better portability.
    @return The frozen graph definition.
    """
    graph = session.graph
    with graph.as_default():
        freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
        output_names = output_names or []
        output_names += [v.op.name for v in tf.global_variables()]
        input_graph_def = graph.as_graph_def()
        if clear_devices:
            for node in input_graph_def.node:
                node.device = ""
        frozen_graph = tf.graph_util.convert_variables_to_constants(
            session, input_graph_def, output_names, freeze_var_names)
        return frozen_graph

This is inspired by the implementation of freeze_graph.py, and the parameters are similar to the script's. session is the TensorFlow session object. keep_var_names is only needed if you want to keep some variables not frozen (e.g. for stateful models), so generally not. output_names is a list with the names of the operations that produce the outputs that you want. clear_devices just removes any device directives to make the graph more portable. So, for a typical Keras model with one output, you would do something like:

from keras import backend as K

# Create, compile and train model...

frozen_graph = freeze_session(K.get_session(),
                              output_names=[out.op.name for out in model.outputs])

Then you can write the graph to a file as usual with tf.train.write_graph:

tf.train.write_graph(frozen_graph, "some_directory", "my_model.pb", as_text=False)
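
If you later want to run inference from the frozen .pb in plain TensorFlow 1.x, you can import the GraphDef back into a fresh graph. A minimal sketch, where the tensor names 'input_1:0' and 'output/Sigmoid:0' are hypothetical placeholders; substitute the names your own model reports:

import tensorflow as tf

# Load the serialized GraphDef written by tf.train.write_graph above.
with tf.gfile.GFile('some_directory/my_model.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Import it into a fresh graph and run a forward pass.
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
    x = graph.get_tensor_by_name('input_1:0')         # hypothetical input name
    y = graph.get_tensor_by_name('output/Sigmoid:0')  # hypothetical output name
    with tf.Session(graph=graph) as sess:
        print(sess.run(y, feed_dict={x: [[0.0, 1.0]]}))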

The freeze_session method works fine. But compared to that, saving to a checkpoint file and then using the freeze_graph tool that comes with TensorFlow seems simpler to me, as it's easier to maintain. All you need to do is the following two steps:

First, after your Keras code calls model.fit(...) and trains your model, add the following:

from keras import backend as K
import tensorflow as tf
print(model.output.op.name)
saver = tf.train.Saver()
saver.save(K.get_session(), '/tmp/keras_model.ckpt')

Then cd to your TensorFlow root directory and run:

python tensorflow/python/tools/freeze_graph.py \
--input_meta_graph=/tmp/keras_model.ckpt.meta \
--input_checkpoint=/tmp/keras_model.ckpt \
--output_graph=/tmp/keras_frozen.pb \
--output_node_names="<output_node_name_printed_in_step_1>" \
--input_binary=true

Update for Tensorflow 2

Saving everything into a single archive in the TensorFlow SavedModel format (contains a saved_model.pb file):

model = ...  # Get model (Sequential, Functional Model, or Model subclass)
model.save('path/to/location')

or in the older Keras H5 format:

model = ...  # Get model (Sequential, Functional Model, or Model subclass)
model.save('model.h5')

The recommended format is SavedModel.

Loading the model back:

from tensorflow import keras
model = keras.models.load_model('path/to/location')
model = keras.models.load_model('model.h5')

A SavedModel contains a complete TensorFlow program, including trained parameters (i.e., tf.Variables) and computation. It does not require the original model-building code to run, which makes it useful for sharing or deploying with TFLite, TensorFlow.js, TensorFlow Serving, or TensorFlow Hub.
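
Since the original question targets Android, note that a SavedModel can be converted to a TFLite flatbuffer in a couple of lines. A minimal sketch, assuming the 'path/to/location' directory saved above:

import tensorflow as tf

# Convert the SavedModel directory to a .tflite flatbuffer for mobile inference.
converter = tf.lite.TFLiteConverter.from_saved_model('path/to/location')
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)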


Example for Tensorflow 2

The following simple example (XOR example) shows how to export Keras models (in both h5 and pb formats) and how to use the model in Python and C++:


train.py:

import numpy as np
import tensorflow as tf

print(tf.__version__)  # 2.4.1

x_train = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], 'float32')
y_train = np.array([[0], [1], [1], [0]], 'float32')

inputs = tf.keras.Input(shape=(2,), name='input')
x = tf.keras.layers.Dense(64, activation='relu')(inputs)
x = tf.keras.layers.Dense(64, activation='relu')(x)
x = tf.keras.layers.Dense(64, activation='relu')(x)
x = tf.keras.layers.Dense(64, activation="relu")(x)
outputs = tf.keras.layers.Dense(1, activation='sigmoid', name='output')(x)

model = tf.keras.Model(inputs=inputs, outputs=outputs, name='xor')

model.summary()

model.compile(loss='mean_squared_error', optimizer='adam', metrics=['binary_accuracy'])

model.fit(x_train, y_train, epochs=100)

model.save('./xor/')  # SavedModel format

model.save('./xor.h5')  # Keras H5 format

After running the above script:

.
├── train.py
├── xor
│   ├── assets
│   ├── saved_model.pb
│   └── variables
│       ├── variables.data-00000-of-00001
│       └── variables.index
└── xor.h5

predict.py:

import numpy as np
import tensorflow as tf

print(tf.__version__)  # 2.4.1

model = tf.keras.models.load_model('./xor/')  # SavedModel format
# model = tf.keras.models.load_model('./xor.h5')  # Keras H5 format

# 0 xor 0 = [[0.11921611]] ~= 0
print('0 xor 0 = ', model.predict(np.array([[0, 0]])))

# 0 xor 1 = [[0.96736085]] ~= 1
print('0 xor 1 = ', model.predict(np.array([[0, 1]])))

# 1 xor 0 = [[0.97254556]] ~= 1
print('1 xor 0 = ', model.predict(np.array([[1, 0]])))

# 1 xor 1 = [[0.0206149]] ~= 0
print('1 xor 1 = ', model.predict(np.array([[1, 1]])))

Convert Model to ONNX:

ONNX is a new standard for exchanging deep learning models. It promises to make deep learning models portable, thus preventing vendor lock-in.

ONNX is an open format built to represent machine learning models. ONNX defines a common set of operators - the building blocks of machine learning and deep learning models - and a common file format to enable AI developers to use models with a variety of frameworks, tools, runtimes, and compilers.

$ pip install onnxruntime
$ pip install tf2onnx
$ python -m tf2onnx.convert --saved-model ./xor/ --opset 9 --output xor.onnx

# INFO - Successfully converted TensorFlow model ./xor/ to ONNX
# INFO - Model inputs: ['input:0']
# INFO - Model outputs: ['output']
# INFO - ONNX model is saved at xor.onnx

By specifying --opset the user can override the default to generate a graph with the desired opset. For example, --opset 13 would create an ONNX graph that uses only ops available in opset 13. Because older opsets have fewer ops in most cases, some models might not convert with an older opset.
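
Since onnxruntime was installed above, you can also sanity-check the converted model directly from Python. A minimal sketch, using the input/output names ('input:0', 'output') reported in the converter log:

import numpy as np
import onnxruntime as ort

# Run the converted XOR model with ONNX Runtime.
session = ort.InferenceSession('./xor.onnx')

x = np.array([[0, 1]], dtype=np.float32)
result = session.run(['output'], {'input:0': x})
print('0 xor 1 = ', result[0])  # ~= 1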


opencv-predict.py:

import numpy as np
import cv2

print(cv2.__version__)  # 4.5.1

model = cv2.dnn.readNetFromONNX('./xor.onnx')

# 0 xor 0 = [[0.11921611]] ~= 0
model.setInput(np.array([[0, 0]]), name='input:0')
print('0 xor 0 = ', model.forward(outputName='output'))

# 0 xor 1 = [[0.96736085]] ~= 1
model.setInput(np.array([[0, 1]]), name='input:0')
print('0 xor 1 = ', model.forward(outputName='output'))

# 1 xor 0 = [[0.97254556]] ~= 1
model.setInput(np.array([[1, 0]]), name='input:0')
print('1 xor 0 = ', model.forward(outputName='output'))

# 1 xor 1 = [[0.02061491]] ~= 0
model.setInput(np.array([[1, 1]]), name='input:0')
print('1 xor 1 = ', model.forward(outputName='output'))

predict.cpp:

#include <cstdlib>
#include <iostream>
#include <opencv2/opencv.hpp>

int main(int argc, char **argv)
{
    std::cout << CV_VERSION << std::endl; // 4.2.0

    cv::dnn::Net net;

    net = cv::dnn::readNetFromONNX("./xor.onnx");

    // 0 xor 0 = [0.11921611] ~= 0
    float x0[] = { 0, 0 };
    net.setInput(cv::Mat(1, 2, CV_32F, x0), "input:0");
    std::cout << "0 xor 0 = " << net.forward("output") << std::endl;

    // 0 xor 1 = [0.96736085] ~= 1
    float x1[] = { 0, 1 };
    net.setInput(cv::Mat(1, 2, CV_32F, x1), "input:0");
    std::cout << "0 xor 1 = " << net.forward("output") << std::endl;

    // 1 xor 0 = [0.97254556] ~= 1
    float x2[] = { 1, 0 };
    net.setInput(cv::Mat(1, 2, CV_32F, x2), "input:0");
    std::cout << "1 xor 0 = " << net.forward("output") << std::endl;

    // 1 xor 1 = [0.020614909] ~= 0
    float x3[] = { 1, 1 };
    net.setInput(cv::Mat(1, 2, CV_32F, x3), "input:0");
    std::cout << "1 xor 1 = " << net.forward("output") << std::endl;

    return EXIT_SUCCESS;
}

Compile and Run:

$ sudo apt install build-essential pkg-config libopencv-dev
$ g++ predict.cpp `pkg-config --cflags --libs opencv4` -o predict
$ ./predict

Original Answer

The following simple example (XOR example) shows how to export Keras models (in both h5 and pb formats) and how to use the model in Python and C++:


train.py:

import numpy as np
import tensorflow as tf


def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True):
    """
    Freezes the state of a session into a pruned computation graph.

    Creates a new computation graph where variable nodes are replaced by
    constants taking their current value in the session. The new graph will be
    pruned so subgraphs that are not necessary to compute the requested
    outputs are removed.
    @param session The TensorFlow session to be frozen.
    @param keep_var_names A list of variable names that should not be frozen,
                          or None to freeze all the variables in the graph.
    @param output_names Names of the relevant graph outputs.
    @param clear_devices Remove the device directives from the graph for better portability.
    @return The frozen graph definition.
    """
    graph = session.graph
    with graph.as_default():
        freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
        output_names = output_names or []
        output_names += [v.op.name for v in tf.global_variables()]
        input_graph_def = graph.as_graph_def()
        if clear_devices:
            for node in input_graph_def.node:
                node.device = ''
        frozen_graph = tf.graph_util.convert_variables_to_constants(
            session, input_graph_def, output_names, freeze_var_names)
        return frozen_graph


X = np.array([[0,0], [0,1], [1,0], [1,1]], 'float32')
Y = np.array([[0], [1], [1], [0]], 'float32')

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(64, input_dim=2, activation='relu'))
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))

model.compile(loss='mean_squared_error', optimizer='adam', metrics=['binary_accuracy'])

model.fit(X, Y, batch_size=1, epochs=100, verbose=0)

# inputs:  ['dense_input']
print('inputs: ', [input.op.name for input in model.inputs])

# outputs:  ['dense_4/Sigmoid']
print('outputs: ', [output.op.name for output in model.outputs])

model.save('./xor.h5')

frozen_graph = freeze_session(tf.keras.backend.get_session(), output_names=[out.op.name for out in model.outputs])
tf.train.write_graph(frozen_graph, './', 'xor.pbtxt', as_text=True)
tf.train.write_graph(frozen_graph, './', 'xor.pb', as_text=False)

predict.py:

import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model('./xor.h5')

# 0 ^ 0 =  [[0.01974997]]
print('0 ^ 0 = ', model.predict(np.array([[0, 0]])))

# 0 ^ 1 =  [[0.99141496]]
print('0 ^ 1 = ', model.predict(np.array([[0, 1]])))

# 1 ^ 0 =  [[0.9897714]]
print('1 ^ 0 = ', model.predict(np.array([[1, 0]])))

# 1 ^ 1 =  [[0.00406971]]
print('1 ^ 1 = ', model.predict(np.array([[1, 1]])))

opencv-predict.py:

import numpy as np
import cv2 as cv


model = cv.dnn.readNetFromTensorflow('./xor.pb')

# 0 ^ 0 =  [[0.01974997]]
model.setInput(np.array([[0, 0]]), name='dense_input')
print('0 ^ 0 = ', model.forward(outputName='dense_4/Sigmoid'))

# 0 ^ 1 =  [[0.99141496]]
model.setInput(np.array([[0, 1]]), name='dense_input')
print('0 ^ 1 = ', model.forward(outputName='dense_4/Sigmoid'))

# 1 ^ 0 =  [[0.9897714]]
model.setInput(np.array([[1, 0]]), name='dense_input')
print('1 ^ 0 = ', model.forward(outputName='dense_4/Sigmoid'))

# 1 ^ 1 =  [[0.00406971]]
model.setInput(np.array([[1, 1]]), name='dense_input')
print('1 ^ 1 = ', model.forward(outputName='dense_4/Sigmoid'))

predict.cpp:

#include <cstdlib>
#include <iostream>
#include <opencv2/opencv.hpp>

int main(int argc, char **argv)
{
    cv::dnn::Net net;

    net = cv::dnn::readNetFromTensorflow("./xor.pb");

    // 0 ^ 0 = [0.018541215]
    float x0[] = { 0, 0 };
    net.setInput(cv::Mat(1, 2, CV_32F, x0), "dense_input");
    std::cout << "0 ^ 0 = " << net.forward("dense_4/Sigmoid") << std::endl;

    // 0 ^ 1 = [0.98295897]
    float x1[] = { 0, 1 };
    net.setInput(cv::Mat(1, 2, CV_32F, x1), "dense_input");
    std::cout << "0 ^ 1 = " << net.forward("dense_4/Sigmoid") << std::endl;

    // 1 ^ 0 = [0.98810625]
    float x2[] = { 1, 0 };
    net.setInput(cv::Mat(1, 2, CV_32F, x2), "dense_input");
    std::cout << "1 ^ 0 = " << net.forward("dense_4/Sigmoid") << std::endl;

    // 1 ^ 1 = [0.010002014]
    float x3[] = { 1, 1 };
    net.setInput(cv::Mat(1, 2, CV_32F, x3), "dense_input");
    std::cout << "1 ^ 1 = " << net.forward("dense_4/Sigmoid") << std::endl;

    return EXIT_SUCCESS;
}

At this time, all of the older answers above are outdated. As of TensorFlow 2.1:

from tensorflow.keras.models import Model, load_model
model = load_model(MODEL_FULLPATH)
model.save(MODEL_FULLPATH_MINUS_EXTENSION)

will create a folder with a 'saved_model.pb' inside.

There is a very important point when you want to convert to TensorFlow: if you use dropout, batch normalization, or any other layers like these (which compute values but have no trainable weights), you should change the learning phase of the Keras backend. Here is a discussion about it.

import keras.backend as K
K.set_learning_phase(0)  # 0 testing, 1 training mode

This solution worked for me. Courtesy of https://medium.com/tensorflow/training-and-serving-ml-models-with-tf-keras-fd975cc0fa27

import tensorflow as tf

# The export path contains the name and the version of the model
tf.keras.backend.set_learning_phase(0) # Ignore dropout at inference
model = tf.keras.models.load_model('./model.h5')
export_path = './PlanetModel/1'

# Fetch the Keras session and save the model
# The signature definition is defined by the input and output tensors
# And stored with the default serving key
with tf.keras.backend.get_session() as sess:
    tf.saved_model.simple_save(
        sess,
        export_path,
        inputs={'input_image': model.input},
        outputs={t.name:t for t in model.outputs})

Please use tf.saved_model.simple_save; some example code:

with tf.keras.backend.get_session() as sess:
    tf.saved_model.simple_save(
        sess,
        export_path,
        inputs={'input': keras_model.input},
        outputs={'output': keras_model.output})

=== Update ===

You can use save_keras_model; example code:

saved_model_path = tf.contrib.saved_model.save_keras_model(model, "./saved_models")

If you want the model only for inference, you should first freeze the graph and then write it as a .pb file. The code snippet looks like this (code borrowed from here):

import tensorflow as tf
from tensorflow.python.framework import graph_util
from tensorflow.python.framework import graph_io
import keras
from keras import backend as K

sess = K.get_session()

constant_graph = graph_util.convert_variables_to_constants(
        sess,
        sess.graph.as_graph_def(),
        ["name_of_the_output_graph_node"])

graph_io.write_graph(constant_graph, "path/to/output/folder", 
                     "output_model_name", as_text=False)

You can do the above using the keras_to_tensorflow tool: https://github.com/amir-abdi/keras_to_tensorflow

The keras_to_tensorflow tool takes care of the above operations, with some extra features for a more diverse set of use cases. Just call it with the correct input arguments (e.g. the input_model and output_model flags).

If you want to retrain the model in TensorFlow, use the above tool with the output_meta_ckpt flag to export checkpoints and meta graphs, as in the sketch below.
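
A hedged sketch of the invocation (the flag names come from the description above; check the repository README for the exact current syntax):

$ python keras_to_tensorflow.py --input_model="model.h5" --output_model="model.pb"

# Additionally export checkpoints and meta graphs for retraining:
$ python keras_to_tensorflow.py --input_model="model.h5" --output_model="model.pb" --output_meta_ckpt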

Using estimator.export_savedmodel, we can easily convert an h5 model to a saved model. Check the docs here: https://www.tensorflow.org/api_docs/python/tf/estimator/Estimator

def prepare_image(image_str_tensor):
    image_contents = tf.read_file(image_str_tensor)
    image = tf.image.decode_jpeg(image_contents, channels=3)
    image = tf.image.resize_images(image, [224, 224])
    image = tf.cast(image, tf.float32)
    return preprocess_input(image)

def serving_input_receiver_fn():
    input_ph = tf.placeholder(tf.string, shape=[None])
    images_tensor = tf.map_fn(
          prepare_image, input_ph, back_prop=False, dtype=tf.float32)
    images_tensor = tf.image.convert_image_dtype(images_tensor, 
                      dtype=tf.float32)

    return tf.estimator.export.ServingInputReceiver({"input": images_tensor}, 
             {'image_url': input_ph})

estimator = tf.keras.estimator.model_to_estimator(
    keras_model_path=h5_model_path
)

estimator.export_savedmodel(saved_model_path, serving_input_receiver_fn=serving_input_receiver_fn)

The TensorFlow tf.saved_model api is best for generating a pb model.

If you have an h5 model, then load it through Keras load_model:

from tensorflow import keras
model = keras.models.load_model("model.h5")

Save the TensorFlow model through the saved_model api; it will save the model in pb format. This model will have the required metadata for serving it through Google AI Platform, so you can upload the directory to AI Platform to serve your model.

import tensorflow as tf
tf.saved_model.save(model, './directory-to-save-file/')

tf 2.2.0

Import tensorflow.keras instead of just keras, because it will load your model as a keras.engine.sequential.Sequential object, which cannot be directly converted into the TensorFlow .pb format:

#import keras
import tensorflow.keras as keras
model = keras.models.load_model(load_path)
model.save(save_path)

With tensorflow 2.x: If you want to save only the graph definition in pbtxt, use the code below.

import tensorflow as tf
keras_model = ...
tf.io.write_graph(
  keras_model.output.graph,
  'model_dir',
  'model.pbtxt',
  as_text=True,
)

For users trying to convert a Mask-RCNN model/weights into a frozen graph, most answers here won't suffice.

This can be done while saving the model (.h5) weights in the mrcnn/model.py file. You just need to make the following changes (git diff):

+    def freeze_session(self, session, keep_var_names=None, output_names=None, clear_devices=True):
+        """
+        Freezes the state of a session into a pruned computation graph.
+
+        Creates a new computation graph where variable nodes are replaced by
+        constants taking their current value in the session. The new graph will be
+        pruned so subgraphs that are not necessary to compute the requested
+        outputs are removed.
+        @param session The TensorFlow session to be frozen.
+        @param keep_var_names A list of variable names that should not be frozen,
+                              or None to freeze all the variables in the graph.
+        @param output_names Names of the relevant graph outputs.
+        @param clear_devices Remove the device directives from the graph for better portability.
+        @return The frozen graph definition.
+        """
+        graph = session.graph
+        with graph.as_default():
+            freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
+            output_names = output_names or []
+            output_names += [v.op.name for v in tf.global_variables()]
+            input_graph_def = graph.as_graph_def()
+            if clear_devices:
+                for node in input_graph_def.node:
+                    node.device = ""
+            frozen_graph = tf.graph_util.convert_variables_to_constants(
+                session, input_graph_def, output_names, freeze_var_names)
+            return frozen_graph
+
     def train(self, train_dataset, val_dataset, learning_rate, epochs, layers,
               augmentation=None, custom_callbacks=None, no_augmentation_sources=None):
         """Train the model.
@@ -2373,6 +2401,12 @@ class MaskRCNN():
             workers=workers,
             use_multiprocessing=True,
         )
+        #######using session and saving .pb file##
+        frozen_graph = self.freeze_session(K.get_session(),
+                              output_names=[out.op.name for out in self.keras_model.outputs])
+        print('\n\n\t\t******* Writing Frozen Graph in logs directory *******\n\n')
+        tf.train.write_graph(frozen_graph, self.log_dir, "my_model.pb", as_text=False)
+
         self.epoch = max(self.epoch, epochs)

The complete file can be found HERE. With it, I was able to convert ResNet50 and ResNet101 backbones for both coco and imagenet weights.

In my case, I was trying to convert Darknet weights to a TensorFlow model and I needed the model in .pb format. I tried many of the solutions given here as well as on other forums, but I was finally able to fix it by upgrading from TensorFlow v2.2 to TensorFlow v2.3, after which I could successfully save the model in .pb format.

Here is the documentation for reference:

My imports:

import tensorflow as tf
import tensorflow.keras as keras

Code that saves the model in .pb format:

model.save("/path to directory/")

Code that saves the model in .h5 format:

tf.keras.models.save_model(model=model, filepath='modelname.h5')

Note: I could only get this working when I upgraded TensorFlow from version 2.2 to 2.3.

