Tensorflow 2.0: How to change the output signature while using tf.saved_model

I would like to change the input and output signatures of the saved model. I used tf.Module objects to build the operations of the main model.

class Generator(tf.Module):
    def __init__(self, ...):
        super(Generator, self).__init__(name=name)
        ...       
        with self.name_scope:
             ...
    @tf.Module.with_name_scope
    def __call__(self, input):
        ...

    @tf.function
    def serve_function(self, input):
        out = self.__call__(input)
        return out



call = model.Generator.serve_function.get_concrete_function(tf.TensorSpec([None, 256, 256, 3], tf.float32))
tf.saved_model.save(model.Generator, os.path.join(train_log_dir, 'frozen'))
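# Note: the concrete function `call` obtained above is never passed to
# tf.saved_model.save() via its `signatures` argument, so TensorFlow falls
# back to a default signature, which is why the loaded model exposes
# 'serving_default' with an output named 'output_0'.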

Then, when I load the model, the signature key is 'serving_default' and the output is named 'output_0'. How can I change these names?

I figured out a way to define the output signature without using tf.Module: define a tf.function that returns a dictionary of outputs, where the keys used in the dictionary become the output names.

# Create the model
model = ...

# Train the model
model.fit(...)

# Define where to save the model
export_path = "..."

@tf.function()
def my_predict(my_prediction_inputs):
    inputs = {
        'my_serving_input': my_prediction_inputs,
    }
    prediction = model(inputs)
    return {"my_prediction_outputs": prediction}

my_signatures = my_predict.get_concrete_function(
    my_prediction_inputs=tf.TensorSpec([None, None], dtype=tf.dtypes.float32, name="my_prediction_inputs")
)

# Save the model.
tf.saved_model.save(
    model,
    export_dir=export_path,
    signatures=my_signatures
)

This produces the following signature:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['my_prediction_inputs'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, -1)
        name: serving_default_my_prediction_inputs:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['my_prediction_outputs'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 1)
        name: StatefulPartitionedCall:0
  Method name is: tensorflow/serving/predict
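
For completeness, here is a minimal sketch (not part of the original answer) of loading the exported model back and invoking the signature; export_path is the directory used above, and the sample input values are made up for illustration:

import tensorflow as tf

# Load the SavedModel exported above.
loaded = tf.saved_model.load(export_path)
serving_fn = loaded.signatures["serving_default"]

# Inputs are passed as keyword arguments named after the signature's input
# keys; the result is a dict keyed by the output names defined in my_predict.
result = serving_fn(
    my_prediction_inputs=tf.constant([[0.1, 0.2]], dtype=tf.float32)
)
print(result["my_prediction_outputs"])

If you also want to change the signature key itself (rather than keep the default 'serving_default'), tf.saved_model.save accepts a dictionary mapping signature keys to concrete functions; "my_signature_key" below is an arbitrary example name:

tf.saved_model.save(
    model,
    export_dir=export_path,
    signatures={"my_signature_key": my_signatures}
)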

Another way of creating the serving_default signature is:

import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text

export_dir = "./models/use/00000001"
module = hub.load("https://tfhub.dev/google/universal-sentence-encoder-multilingual-large/3")

@tf.function
def my_module_encoder(text):
    # The dictionary keys become the output names in the exported SignatureDef.
    outputs = {
        'embeddings': module(text)
    }
    return outputs

tf.saved_model.save(
    module, 
    export_dir, 
    signatures=my_module_encoder.get_concrete_function(
        text=tf.TensorSpec(shape=None, dtype=tf.string)
    ), 
    options=None
)

You can inspect the created SignatureDefs with the saved_model_cli command, as below:

$ saved_model_cli show --all --dir models/use/00000001

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['__saved_model_init_op']:
  The given SavedModel SignatureDef contains the following input(s):
  The given SavedModel SignatureDef contains the following output(s):
    outputs['__saved_model_init_op'] tensor_info:
        dtype: DT_INVALID
        shape: unknown_rank
        name: NoOp
  Method name is:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['text'] tensor_info:
        dtype: DT_STRING
        shape: unknown_rank
        name: serving_default_text:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['embeddings'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 512)
        name: StatefulPartitionedCall:0
  Method name is: tensorflow/serving/predict
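
And a hedged sketch of consuming this exported model (tensorflow_text must be imported before loading so the custom ops used by the encoder are registered; the sample sentence is illustrative):

import tensorflow as tf
import tensorflow_text  # registers ops required by the multilingual USE model

# Load the exported module and call its serving signature.
loaded = tf.saved_model.load("./models/use/00000001")
encoder = loaded.signatures["serving_default"]

# Input/output keys match the SignatureDef shown above.
embeddings = encoder(text=tf.constant(["Hello, world"]))["embeddings"]
print(embeddings.shape)  # (1, 512)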
