
TF2: Add preprocessing to pretrained saved model for TensorFlow Serving (extending the graph of a SavedModel)

I upgraded to TensorFlow 2 and now I am facing a problem when extending a pre-trained model with some additional preprocessing.

I have a pre-trained object detection model (SSD ResNet50 FPN) that I want to deploy to TensorFlow Serving. I want to load the SavedModel and add the necessary preprocessing so it accepts base64-encoded JPEGs directly. I did this before with TF 1.x and another Keras model, and it works:

# TF 1.x export: decode base64 JPEG strings, preprocess, re-wire the model
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.models import load_model

string_inp = tf.placeholder(tf.string, shape=(None,), name='base64_in')
imgs_map = tf.map_fn(
    tf.image.decode_image,
    string_inp,
    dtype=tf.uint8
)
imgs_map.set_shape((None, None, None, 3))
imgs = tf.image.resize_images(imgs_map, [456, 456], method=tf.image.ResizeMethod.BILINEAR)
imgs = tf.reshape(imgs, (-1, 456, 456, 3))
img_uint8 = tf.image.convert_image_dtype(imgs, dtype=tf.uint8, saturate=False)

# feed the preprocessed tensor into the pretrained Keras model
pretrained_model = load_model('my-keras-model.h5', compile=False)
output_tensor = pretrained_model(img_uint8)

signature = tf.saved_model.signature_def_utils.predict_signature_def(
    inputs={'jpegbase64': string_inp}, outputs={'probabilities': output_tensor})

builder = tf.saved_model.builder.SavedModelBuilder('export_dir')  # target dir is a placeholder
builder.add_meta_graph_and_variables(
    sess=K.get_session(),
    tags=[tf.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
            signature
    })
builder.save()
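For reference, a TensorFlow Serving REST request against such a base64 string signature can be built with nothing but the standard library. This is a sketch: the `instances` / `{"b64": ...}` layout is the Serving REST convention for binary string inputs, while the endpoint in the comment is an assumed example:

```python
import base64
import json

def build_serving_request(jpeg_bytes):
    """Build the JSON body for a TensorFlow Serving REST predict call.

    Wrapping the value in {"b64": ...} tells Serving to decode it back to
    raw bytes before feeding the string input tensor.
    """
    payload = {
        "instances": [{"b64": base64.b64encode(jpeg_bytes).decode("utf-8")}]
    }
    return json.dumps(payload)

# The body would be POSTed to something like
# http://host:8501/v1/models/my_model:predict
```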

But once I try to get it working with a SavedModel loaded via model = tf.keras.models.load_model("my_saved_model"), it throws: TypeError: 'AutoTrackable' object is not callable

I guess stacking the model on top of my custom input tensor is not supported, but I haven't found any other working solution. I also experimented with connecting the SavedModel's input tensor directly to the img_uint8 tensor, but I don't know how to wire them together correctly. Any ideas?

OK, I found a solution, here we go:

# running on TF2, so use the v1 compatibility API for the graph/session workflow
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

graph_model = tf.Graph()
sess = tf.Session(graph=graph_model)
sess.as_default()
graph_model.as_default()
model = tf.saved_model.load(sess, export_dir="myModel", tags=['serve'])
graph_model_def = graph_model.as_graph_def()

# here is the important step: create a new graph and DON'T create a new session explicitly
graph_base64 = tf.Graph()
graph_base64.as_default()

string_inp = tf.placeholder(tf.string, shape=(None,), name='base64_in')
imgs_map = tf.map_fn(
    tf.image.decode_image,
    string_inp,
    dtype=tf.uint8
)
imgs_map.set_shape((None, None, None, 3))
imgs = tf.image.resize_images(imgs_map, [300, 300], method=tf.image.ResizeMethod.BILINEAR)
imgs = tf.reshape(imgs, (-1, 300, 300, 3))
img_uint8 = tf.image.convert_image_dtype(imgs, dtype=tf.uint8, saturate=False)

# import the model graph, remapping its input tensor to the new preprocessing output
tf.import_graph_def(graph_model_def, name='', input_map={"image_tensor:0": img_uint8})

The important part is to NOT create a new session. If you do, it won't work anymore. Here is a more detailed description.
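For completeness, in TF2's eager API the same effect can also be achieved without graph surgery, by wrapping the preprocessing and the model in a tf.Module and exporting a new serving signature. This is only a sketch: the matrix of random weights stands in for the real detector (in practice you would call the loaded SavedModel's serving_default signature there), and the input name and 300x300 size are taken from the code above:

```python
import tensorflow as tf

class Base64Wrapper(tf.Module):
    """Wraps a model with base64/JPEG decoding so Serving can take raw strings."""

    def __init__(self):
        super().__init__()
        # Stand-in weights for a real detector; in practice you would use
        # tf.saved_model.load("my_saved_model").signatures["serving_default"].
        self.w = tf.Variable(tf.random.normal([300 * 300 * 3, 2]), name="w")

    @tf.function(input_signature=[tf.TensorSpec((None,), tf.string, name="base64_in")])
    def serve(self, image_bytes):
        def decode(b):
            img = tf.io.decode_image(b, channels=3, expand_animations=False)
            img.set_shape((None, None, 3))
            return tf.image.resize(img, [300, 300])

        # decode and resize each string in the batch, then run the "model"
        imgs = tf.map_fn(decode, image_bytes, fn_output_signature=tf.float32)
        flat = tf.reshape(imgs, (-1, 300 * 300 * 3))
        return {"probabilities": tf.nn.softmax(tf.matmul(flat, self.w))}

wrapper = Base64Wrapper()
tf.saved_model.save(wrapper, "model_with_preprocessing",
                    signatures={"serving_default": wrapper.serve})
```

Note that fn_output_signature requires TF 2.3+; on earlier 2.x versions the map_fn argument is dtype instead.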
