How can I convert a model trained in TensorFlow 2 to a TensorFlow 1 frozen graph
How to convert a frozen graph to TensorFlow Lite
I have been trying all day to follow https://www.tensorflow.org/lite/examples/object_detection/overview#model_customization to convert any of the TensorFlow Zoo models to a TensorFlow Lite model for running on Android, with no luck.
I downloaded several models from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md (FYI: Chrome will not let you download these links directly since they are not HTTPS; you have to right-click, inspect the link, and click it from the inspector).
I have this script:

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='frozen_graph.pb',
    input_shapes={'normalized_input_image_tensor': [1, 300, 300, 3]},
    input_arrays=['normalized_input_image_tensor'],
    output_arrays=['TFLite_Detection_PostProcess',
                   'TFLite_Detection_PostProcess:1',
                   'TFLite_Detection_PostProcess:2',
                   'TFLite_Detection_PostProcess:3']
)
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
but it fails with the error ValueError: Invalid tensors 'normalized_input_image_tensor' were found.

So the lines

input_shapes={'normalized_input_image_tensor': [1, 300, 300, 3]},
input_arrays=['normalized_input_image_tensor'],
output_arrays=['TFLite_Detection_PostProcess', 'TFLite_Detection_PostProcess:1', 'TFLite_Detection_PostProcess:2', 'TFLite_Detection_PostProcess:3']

must be wrong and need different tensor names and shapes. How do I find these for each zoo model, or do I need to run some pre-conversion code first?
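One way to find the right names for any zoo model is to scan the frozen GraphDef itself: Placeholder ops are the feedable inputs, and ops that no other node consumes are candidate outputs. Below is a minimal sketch of that idea; the helper name `list_io_candidates` is my own, and it is demoed on a tiny stand-in graph since a real zoo model cannot be bundled here — in practice you would pass the `graph_def` parsed from `frozen_graph.pb`.

```python
import tensorflow as tf

def list_io_candidates(graph_def):
    # Collect every node name that some other node consumes as input.
    consumed = set()
    for node in graph_def.node:
        for inp in node.input:
            consumed.add(inp.split(':')[0].lstrip('^'))
    # Placeholders are the feedable inputs of a frozen graph.
    inputs = [n.name for n in graph_def.node if n.op == 'Placeholder']
    # Nodes that nothing consumes are candidate outputs.
    outputs = [n.name for n in graph_def.node
               if n.name not in consumed and n.op != 'Placeholder']
    return inputs, outputs

# Demo on a tiny TF1-style graph (stand-in for a graph_def parsed from
# frozen_graph.pb via tf.compat.v1.GraphDef().ParseFromString).
g = tf.Graph()
with g.as_default():
    x = tf.compat.v1.placeholder(tf.float32, [1, 300, 300, 3],
                                 name='normalized_input_image_tensor')
    tf.identity(x * 2.0, name='detections')

ins, outs = list_io_candidates(g.as_graph_def())
print(ins)   # candidate input_arrays
print(outs)  # candidate output_arrays
```

For the detection zoo graphs the candidate-output list can contain several nodes, so you still need to pick the post-processing tensors you actually want, but it narrows the search considerably.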
Running the code snippet shown further below, I get:
--------------------------------------------------
Frozen model layers:
name: "add/y"
op: "Const"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "value"
  value {
    tensor {
      dtype: DT_FLOAT
      tensor_shape {
      }
      float_val: 1.0
    }
  }
}
Input layer: add/y
Output layer: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/NextIteration_1
--------------------------------------------------
But I do not understand how this maps to an input_shape or otherwise helps the conversion.
Is it even possible to convert a model such as faster_rcnn_inception_v2_coco to tflite? I read somewhere that only SSD models are supported.
This snippet:

import tensorflow as tf

def print_layers(graph_def):
    def _imports_graph_def():
        tf.compat.v1.import_graph_def(graph_def, name="")

    wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, [])
    import_graph = wrapped_import.graph

    print("-" * 50)
    print("Frozen model layers: ")
    layers = [op.name for op in import_graph.get_operations()]
    ops = import_graph.get_operations()
    print(ops[0])
    print("Input layer: ", layers[0])
    print("Output layer: ", layers[-1])
    print("-" * 50)

# Load the frozen graph using TensorFlow 1.x compatibility functions
with tf.io.gfile.GFile("model.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

print_layers(graph_def=graph_def)
prints the attributes of the input layer, including its shape, together with the names of the input and output layers:
--------------------------------------------------
Frozen model layers:
name: "image_tensor"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_UINT8
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: -1
      }
      dim {
        size: -1
      }
      dim {
        size: -1
      }
      dim {
        size: 3
      }
    }
  }
}
Input layer: image_tensor
Output layer: detection_classes
--------------------------------------------------
I think this will help you.

These models were made with TensorFlow version 1, so you have to use the saved_model to generate a concrete function (because TFLite does not like dynamic input shapes), and from there convert to TFLite.

I will write out a simple solution that you can use immediately.

Open a Colab notebook; it is free and online. Go to the Colab site and click New Notebook at the bottom right.
First cell (type the following and execute it with the play button):
!wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03.tar.gz
!tar -xzvf "/content/ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03.tar.gz" -C "/content/"
Second cell (type, execute):
import tensorflow as tf
print(tf.__version__)
Third cell (type, execute):
model = tf.saved_model.load('/content/ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/saved_model')
concrete_func = model.signatures[
    tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
concrete_func.inputs[0].set_shape([1, 300, 300, 3])
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()

with open('detect.tflite', 'wb') as f:
    f.write(tflite_model)
The line below is necessary because TFLite does not support some of the operations natively:
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
But you will then also have to add the matching dependency (the TensorFlow Lite Select TF ops library) to your mobile project.
If you want to shave a few MB off the tflite file and make it smaller, follow these procedures.
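For reference, one common way to shrink a .tflite file (which may or may not be exactly what that procedure describes) is post-training quantization with a representative dataset. Here is a sketch on a tiny stand-in Keras model so it runs anywhere; in practice you would reuse the converter built from your SavedModel's concrete function.

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model; substitute your own converter in practice.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4),
])

# Representative samples let the converter calibrate integer ranges.
def representative_data():
    for _ in range(10):
        yield [np.random.rand(1, 8).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
small_model = converter.convert()

with open('detect_quant.tflite', 'wb') as f:
    f.write(small_model)
print(len(small_model), 'bytes')
```

Quantization trades a little accuracy for size, so re-check your detections afterwards.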
When it is done, you will see a detect.tflite model on the left.
Go to netron.app and copy-paste the file or browse to upload it. You will see all the details:
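If you would rather verify the file programmatically instead of in netron, tf.lite.Interpreter reports the same input/output details. The sketch below builds and converts a tiny stand-in model in memory so it is self-contained; for the real file you would use tf.lite.Interpreter(model_path='detect.tflite') instead.

```python
import numpy as np
import tensorflow as tf

# Stand-in model converted in memory; swap in model_path='detect.tflite'
# to inspect the real artifact.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
print('input :', inp['name'], inp['shape'], inp['dtype'])
print('output:', out['name'], out['shape'], out['dtype'])

# Run one inference to confirm the model actually executes.
interpreter.set_tensor(inp['index'], np.zeros(inp['shape'], dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out['index'])
print('result shape:', result.shape)
```

This also catches shape problems early: if set_shape on the concrete function did not stick, the input details will still show a dynamic dimension.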
If I forgot something or you need anything more, ping me.

Happy coding.