
How to write serving input function for Tensorflow model trained without using Estimators?

I have a model trained on a single machine without using Estimator, and I'm looking to serve the final trained model on Google Cloud AI Platform (ML Engine). I exported the frozen graph as a SavedModel using SavedModelBuilder and deployed it on the AI Platform. It works fine for small input images, but for it to accept large input images for online prediction, I need to change it to accept b64-encoded strings ( {'image_bytes': {'b64': base64.b64encode(jpeg_data).decode()}} ), which are converted to the required tensor by a serving_input_fn when using Estimators.
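
For reference, an Estimator-style serving_input_fn that does this decoding looks roughly like the sketch below (TF 1.x, not from the original post; the feature key, the decode step, and the 224x224 size are placeholders to adapt to the actual model):

import tensorflow as tf

def serving_input_fn():
    # Batch of JPEG byte strings; with AI Platform the {'b64': ...} wrapper is
    # base64-decoded by the service before it reaches this tensor.
    image_bytes = tf.placeholder(dtype=tf.string, shape=[None], name='image_bytes')

    def _decode(img_bytes):
        img = tf.image.decode_jpeg(img_bytes, channels=3)
        img = tf.image.convert_image_dtype(img, tf.float32)
        return tf.image.resize_images(img, [224, 224])  # assumed input size

    images = tf.map_fn(_decode, image_bytes, dtype=tf.float32)
    return tf.estimator.export.ServingInputReceiver(
        features={'image': images},  # hypothetical feature key
        receiver_tensors={'image_bytes': image_bytes})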

What options do I have if I am not using an Estimator? If I have a frozen graph or a SavedModel created with SavedModelBuilder, is there a way to do something similar to an Estimator's serving_input_fn when exporting/saving?

Here's the code I'm using for exporting:

import tensorflow as tf
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import tag_constants

export_dir = 'serving_model/'
graph_pb = 'model.pb'

builder = tf.saved_model.builder.SavedModelBuilder(export_dir)

with tf.gfile.GFile(graph_pb, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

sigs = {}

with tf.Session(graph=tf.Graph()) as sess:
    # name="" is important to ensure we don't get spurious prefixing
    tf.import_graph_def(graph_def, name="")
    g = tf.get_default_graph()

    inp = g.get_tensor_by_name("image_bytes:0")
    out_f1 = g.get_tensor_by_name("feature_1:0")
    out_f2 = g.get_tensor_by_name("feature_2:0")

    sigs[signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY] = \
        tf.saved_model.signature_def_utils.predict_signature_def(
            {"image_bytes": inp}, {"f1": out_f1, "f2": out_f2})

    builder.add_meta_graph_and_variables(sess,
                                         [tag_constants.SERVING],
                                         strip_default_attrs=True,
                                         signature_def_map=sigs)

builder.save()
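
One possibility, not from the original post, is to keep the SavedModelBuilder export but build decode ops first and splice them into the frozen graph with input_map. This is only a sketch: it reuses the graph_def loaded above and assumes the graph's "image_bytes:0" input actually expects a decoded float image batch; the decode, resize, and output names are adapted from the snippet above:

import tensorflow as tf
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import tag_constants

builder = tf.saved_model.builder.SavedModelBuilder('serving_model_b64/')

with tf.Session(graph=tf.Graph()) as sess:
    # New serving input: a batch of JPEG byte strings. With AI Platform, the
    # {'b64': ...} wrapper is decoded by the service before it reaches this tensor.
    jpeg_bytes = tf.placeholder(tf.string, shape=[None], name='input_bytes')

    def _decode(img_bytes):
        img = tf.image.decode_jpeg(img_bytes, channels=3)
        img = tf.image.convert_image_dtype(img, tf.float32)
        return tf.image.resize_images(img, [224, 224])  # assumed model input size

    images = tf.map_fn(_decode, jpeg_bytes, dtype=tf.float32)

    # Splice the decoded batch in place of the frozen graph's original input.
    tf.import_graph_def(graph_def, name="", input_map={"image_bytes:0": images})
    g = tf.get_default_graph()
    out_f1 = g.get_tensor_by_name("feature_1:0")
    out_f2 = g.get_tensor_by_name("feature_2:0")

    sig = tf.saved_model.signature_def_utils.predict_signature_def(
        {"image_bytes": jpeg_bytes}, {"f1": out_f1, "f2": out_f2})

    builder.add_meta_graph_and_variables(
        sess, [tag_constants.SERVING], strip_default_attrs=True,
        signature_def_map={signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: sig})

builder.save()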

Use a @tf.function to specify a serving signature. Here's an example that calls Keras:

import tensorflow as tf

class ExportModel(tf.keras.Model):
    def __init__(self, model):
        super().__init__()
        self.model = model

    @tf.function(input_signature=[
        tf.TensorSpec([None,], dtype='int32', name='a'),
        tf.TensorSpec([None,], dtype='int32', name='b')
    ])
    def serving_fn(self, a, b):
        return {
            'pred': self.model({'a': a, 'b': b})
        }

    def save(self, export_path):
        sigs = {
            'serving_default' : self.serving_fn
        }
        tf.keras.backend.set_learning_phase(0) # inference only
        tf.saved_model.save(self, export_path, signatures=sigs)

sm = ExportModel(model)
sm.save(EXPORT_PATH)
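
A hedged variant of the same pattern, closer to the image case in the question: a serving function that accepts a batch of JPEG byte strings, decodes them, and calls the wrapped model. The 224x224 size, the 'pred' output key, and fn_output_signature (TF 2.3+) are assumptions to adapt:

import tensorflow as tf

class ImageExportModel(tf.keras.Model):
    def __init__(self, model):
        super().__init__()
        self.model = model

    @tf.function(input_signature=[
        tf.TensorSpec([None], dtype=tf.string, name='image_bytes')
    ])
    def serving_fn(self, image_bytes):
        def _decode(b):
            img = tf.io.decode_jpeg(b, channels=3)
            img = tf.image.convert_image_dtype(img, tf.float32)
            return tf.image.resize(img, [224, 224])  # assumed input size
        images = tf.map_fn(_decode, image_bytes, fn_output_signature=tf.float32)
        return {'pred': self.model(images)}

sm = ImageExportModel(model)
tf.saved_model.save(sm, EXPORT_PATH, signatures={'serving_default': sm.serving_fn})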

First, load your already exported SavedModel with

import tensorflow as tf
loaded_model = tf.saved_model.load(MODEL_DIR)

Then, wrap it with a new Keras model that takes base64 input

class Base64WrapperModel(tf.keras.Model):
  def __init__(self, model):
    super(Base64WrapperModel, self).__init__()
    self.inner_model = model

  @tf.function
  def call(self, base64_input):
    str_input = tf.io.decode_base64(base64_input)
    return self.inner_model(str_input)

wrapper_model = Base64WrapperModel(loaded_model)

Finally, save the wrapped model with the Keras API

wrapper_model.save(EXPORT_DIR)
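
Once deployed, the request payload has to match what the exported signature expects. Below is a minimal sketch that reuses the format from the question; the input key, and whether the service or the graph performs the base64 decoding, are assumptions to verify against the exported signature (for example with saved_model_cli show --dir EXPORT_DIR --all):

import base64
import json

with open('image.jpg', 'rb') as f:
    encoded = base64.b64encode(f.read()).decode()

# If the exported signature takes raw bytes, let AI Platform decode the wrapper:
payload = {'instances': [{'image_bytes': {'b64': encoded}}]}

# If the graph decodes base64 itself (as in the wrapper model above), send the
# encoded text as a plain string instead; note tf.io.decode_base64 expects
# web-safe base64, so base64.urlsafe_b64encode may be needed in that case.
# payload = {'instances': [encoded]}

print(json.dumps(payload))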
