
RaggedTensor request to TensorFlow serving fails

I've created a TensorFlow model that uses RaggedTensors. The model works fine: when I call model.predict I get the expected results.

input = tf.ragged.constant([[[-0.9984272718429565, -0.9422321319580078, -0.27657580375671387, -3.185823678970337, -0.6360141634941101, -1.6579184532165527, -1.9000954627990723, -0.49169546365737915, -0.6758883595466614, -0.6677696704864502, -0.532067060470581], 
                                [-0.9984272718429565, -0.9421600103378296, 2.2048349380493164, -1.273996114730835, -0.6360141634941101, -1.5917999744415283, 0.6147914528846741, -0.49169546365737915, -0.6673409938812256, -0.6583622694015503, -0.5273991227149963], 
                                [-0.9984272718429565, -0.942145586013794, 2.48842453956604, -1.6836735010147095, -0.6360141634941101, -1.5785763263702393, -1.900200605392456, -0.49169546365737915, -0.6656315326690674, -0.6583622694015503, -0.5273991227149963], 
]])
model.predict(input)

>> array([[0.5138151 , 0.3277698 , 0.26122513]], dtype=float32)

I've deployed the model to a TensorFlow Serving server and use the following code to invoke it:

import json
import requests
headers = {"content-type": "application/json"}
data = json.dumps({"instances":[
    [-1.3523329846758267, ... more data ],
    [-1.3523329846758267, ... more data ],
    [-1.3523329846758267, ... more data ],
    [-1.3523329846758267, ... more data ],
    [-1.3523329846758267, ... more data ],
    [-1.3523329846758267, ... more data ],
    [-1.3523329846758267, ... more data ],
    [-1.3523329846758267, ... more data ]]})
json_response = requests.post('http://localhost:8501/v1/models/fashion_model:predict', data=data, headers=headers)
predictions = json.loads(json_response.text)

But then I get the following error:

"instances is a plain list, but expecting list of objects as multiple input tensors required as per tensorinfo_map"

My model description:

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['__saved_model_init_op']:
  The given SavedModel SignatureDef contains the following input(s):
  The given SavedModel SignatureDef contains the following output(s):
    outputs['__saved_model_init_op'] tensor_info:
        dtype: DT_INVALID
        shape: unknown_rank
        name: NoOp
  Method name is: 

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['args_0'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 11)
        name: serving_default_args_0:0
    inputs['args_0_1'] tensor_info:
        dtype: DT_INT64
        shape: (-1)
        name: serving_default_args_0_1:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['dense_2'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 3)
        name: StatefulPartitionedCall:0
  Method name is: tensorflow/serving/predict

Defined Functions:
  Function Name: '__call__'
    Option #1
      Callable with:
        Argument #1
          DType: RaggedTensorSpec
          Value: RaggedTensorSpec(TensorShape([None, None, 11]), tf.float32, 1, tf.int64)
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
    Option #2
      Callable with:
        Argument #1
          DType: RaggedTensorSpec
          Value: RaggedTensorSpec(TensorShape([None, None, 11]), tf.float32, 1, tf.int64)
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None

  Function Name: '_default_save_signature'
    Option #1
      Callable with:
        Argument #1
          DType: RaggedTensorSpec
          Value: RaggedTensorSpec(TensorShape([None, None, 11]), tf.float32, 1, tf.int64)

  Function Name: 'call_and_return_all_conditional_losses'
    Option #1
      Callable with:
        Argument #1
          DType: RaggedTensorSpec
          Value: RaggedTensorSpec(TensorShape([None, None, 11]), tf.float32, 1, tf.int64)
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
    Option #2
      Callable with:
        Argument #1
          DType: RaggedTensorSpec
          Value: RaggedTensorSpec(TensorShape([None, None, 11]), tf.float32, 1, tf.int64)
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None

What am I missing?

Update: After inspecting the saved_model_cli output, I suspect I should send the request as an object like the one below, but I'm not sure what the inputs should be...

{
  "instances": [
    {
      "args_0": nested-list ?,
      "args_0_1": ???
    }
  ]
}

Update 2: A Colab to test this scenario; a link to download the model is included in the Colab.

Update 3:

As suggested by @Niteya Shah, I called the API with:

data = json.dumps({
 "inputs": {
   "args_0": [[-0.9984272718429565, -0.9422321319580078, -0.27657580375671387, -3.185823678970337, -0.6360141634941101, -1.6579184532165527, -1.9000954627990723, -0.49169546365737915, -0.6758883595466614, -0.6677696704864502, -0.532067060470581], 
              [-0.9984272718429565, -0.9421600103378296, 2.2048349380493164, -1.273996114730835, -0.6360141634941101, -1.5917999744415283, 0.6147914528846741, -0.49169546365737915, -0.6673409938812256, -0.6583622694015503, -0.5273991227149963]],
   "args_0_1": [1, 2]  #Please Check what inputs come here ?
  }
})

And got results (finally!):

{'outputs': [[0.466771603, 0.455221593, 0.581544757]]}

Then called the model with the same data like so:

import numpy as np
input = tf.ragged.constant([[
                            [-0.9984272718429565, -0.9422321319580078, -0.27657580375671387, -3.185823678970337, -0.6360141634941101, -1.6579184532165527, -1.9000954627990723, -0.49169546365737915, -0.6758883595466614, -0.6677696704864502, -0.532067060470581], 
                            [-0.9984272718429565, -0.9421600103378296, 2.2048349380493164, -1.273996114730835, -0.6360141634941101, -1.5917999744415283, 0.6147914528846741, -0.49169546365737915, -0.6673409938812256, -0.6583622694015503, -0.5273991227149963]
]])
model.predict(input)

And got different results:

array([[0.4817084 , 0.3649785 , 0.01603118]], dtype=float32)

I guess I'm still not there.

https://www.tensorflow.org/tfx/serving/api_rest#predict_api

I think you need to use the columnar format recommended in the REST API instead of the row format, because the dimensions of your 0th input do not match. This means that instead of instances you will have to use inputs. Since you also have multiple inputs, you will have to pass each one as a named input.

A sample data request could look like this

data = json.dumps({
 "inputs": {
   "args_0": [[-0.9984272718429565, -0.9422321319580078, -0.27657580375671387, -3.185823678970337, -0.6360141634941101, -1.6579184532165527, -1.9000954627990723, -0.49169546365737915, -0.6758883595466614, -0.6677696704864502, -0.532067060470581], 
              [-0.9984272718429565, -0.9421600103378296, 2.2048349380493164, -1.273996114730835, -0.6360141634941101, -1.5917999744415283, 0.6147914528846741, -0.49169546365737915, -0.6673409938812256, -0.6583622694015503, -0.5273991227149963]],
   "args_0_1": [10, 11]  #Substitute this with the correct row partition values. 
  }
})

Edit:

I read about ragged tensors here, and it seems that the second input carries the row partitions. The documentation doesn't say which row-partition style the serving signature uses, so I am assuming row splits. Luckily, tf.RaggedTensor provides properties that compute these for us: use the values and row_splits properties to access the two components. That should work.
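
For example, here is a minimal sketch of deriving both fields from a ragged tensor (with made-up 3-feature rows for brevity; the real model expects 11 features per inner row). Note that row splits always start at 0, so for the two-row batch in Update 3 args_0_1 should be [0, 2] rather than [1, 2], which would explain the differing results:

import json
import tensorflow as tf

# Hypothetical batch of 1 example containing 2 inner rows of 3 features
# (the model in the question expects 11 features per inner row).
rt = tf.ragged.constant([[[1.0, 2.0, 3.0],
                          [4.0, 5.0, 6.0]]], ragged_rank=1)

# The serving signature splits the ragged tensor into its flat values
# ("args_0") and its row splits ("args_0_1").
data = json.dumps({
    "inputs": {
        "args_0": rt.values.numpy().tolist(),       # shape (2, 3)
        "args_0_1": rt.row_splits.numpy().tolist()  # [0, 2]
    }
})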

Others may benefit from this, as it took me a while to stitch together:

  1. Training a toy LSTM model on ragged tensors.
  2. Loading it into TensorFlow Serving.
  3. Making a prediction request with a serialized ragged tensor.

If anyone knows how to rename "args_0" and "args_0_1", please add; one possible approach is sketched below. Relevant GitHub issue: https://github.com/tensorflow/tensorflow/issues/37226
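
One possible approach (a hypothetical, untested sketch; tokens is just an illustrative name, and keras_model / keras_module_path refer to the code in the sections below) is to export an explicit serving signature whose argument carries the name you want, since the flattened component inputs appear to be derived from the argument name:

# Hypothetical: export a custom serving signature so the flattened
# ragged components are named after "tokens" instead of "args_0".
@tf.function(input_signature=[
    tf.RaggedTensorSpec(shape=[None, None], dtype=tf.int64)])
def serve(tokens):
    return {"scores": keras_model(tokens)}

tf.saved_model.save(keras_model, keras_module_path,
                    signatures={"serving_default": serve})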

Build & Save Model

TensorFlow version: 2.9.1; Python version: 3.8.12

import tensorflow as tf

# Task: predict whether each sentence is a question or not.
sentences = tf.constant(
    ['What makes you think she is a witch?',
     'She turned me into a newt.',
     'A newt?',
     'Well, I got better.'])
is_question = tf.constant([True, False, True, False])

# Preprocess the input strings.
hash_buckets = 1000
words = tf.strings.split(sentences, ' ')
hashed_words = tf.strings.to_hash_bucket_fast(words, hash_buckets)


# Build the Keras model.
keras_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=[None], dtype=tf.int64, ragged=True),
    tf.keras.layers.Embedding(hash_buckets, 16),
    tf.keras.layers.LSTM(32, use_bias=False),
    tf.keras.layers.Dense(32),
    tf.keras.layers.Activation(tf.nn.relu),
    tf.keras.layers.Dense(1)
])

keras_model.compile(loss='binary_crossentropy', optimizer='rmsprop')
keras_model.fit(hashed_words, is_question, epochs=5)

print(keras_model.predict(hashed_words))

keras_module_path = "/home/ec2-user/SageMaker/keras-toy-lstm/1"
keras_model.save(keras_module_path)
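
Before deploying, it can help to confirm how the ragged input is flattened into two named serving inputs. An optional sanity check:

# Reload the SavedModel and inspect the serving signature; the single
# ragged input shows up as two flat tensors (values and row splits).
loaded = tf.saved_model.load(keras_module_path)
sig = loaded.signatures["serving_default"]
print(sig.structured_input_signature)
print(sig.structured_outputs)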

Load & Infer from Model

Load the model into a TensorFlow Serving container. Note that the model was saved under a numbered version subdirectory (1/), which TensorFlow Serving requires:

docker run -t --rm -p 8501:8501 -v "/home/ec2-user/SageMaker/keras-toy-lstm/:/models/keras-model" -e MODEL_NAME=keras-model tensorflow/serving

Then send a prediction request:

import requests
import json 

# "args_0" holds the flat token IDs and "args_0_1" the row splits
# (one sentence of four tokens here, hence splits [0, 4]).
payload = {"args_0": [940, 203, 668, 638],
           "args_0_1": [0, 4]}
headers = {"content-type": "application/json"}
data = json.dumps({"inputs":payload})

r = requests.post('http://localhost:8501/v1/models/keras-model:predict', data=data, headers=headers)
r.text
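
To build the payload from a ragged tensor instead of hard-coding token IDs, the same values/row_splits decomposition applies (a sketch reusing hashed_words from the training snippet above; the IDs will differ from the literal ones here):

# Take the first training sentence as a ragged batch of size 1 and
# decompose it into the two components the signature expects.
first = hashed_words[:1]
payload = {"args_0": first.values.numpy().tolist(),
           "args_0_1": first.row_splits.numpy().tolist()}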

SavedModelCLI Output

(tensorflow2_p38) sh-4.2$ saved_model_cli show --dir /tmp/tmpgp0loz1v/ --all

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['__saved_model_init_op']:
  The given SavedModel SignatureDef contains the following input(s):
  The given SavedModel SignatureDef contains the following output(s):
    outputs['__saved_model_init_op'] tensor_info:
        dtype: DT_INVALID
        shape: unknown_rank
        name: NoOp
  Method name is: 

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['args_0'] tensor_info:
        dtype: DT_INT64
        shape: (-1)
        name: serving_default_args_0:0
    inputs['args_0_1'] tensor_info:
        dtype: DT_INT64
        shape: (-1)
        name: serving_default_args_0_1:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['dense_1'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 1)
        name: StatefulPartitionedCall:0
  Method name is: tensorflow/serving/predict

Concrete Functions:
  Function Name: '__call__'
    Option #1
      Callable with:
        Argument #1
          DType: RaggedTensorSpec
          Value: RaggedTensorSpec(TensorShape([None, None]), tf.int64, 1, tf.int64)
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
    Option #2
      Callable with:
        Argument #1
          DType: RaggedTensorSpec
          Value: RaggedTensorSpec(TensorShape([None, None]), tf.int64, 1, tf.int64)
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None

  Function Name: '_default_save_signature'
    Option #1
      Callable with:
        Argument #1
          DType: RaggedTensorSpec
          Value: RaggedTensorSpec(TensorShape([None, None]), tf.int64, 1, tf.int64)

  Function Name: 'call_and_return_all_conditional_losses'
    Option #1
      Callable with:
        Argument #1
          DType: RaggedTensorSpec
          Value: RaggedTensorSpec(TensorShape([None, None]), tf.int64, 1, tf.int64)
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
    Option #2
      Callable with:
        Argument #1
          DType: RaggedTensorSpec
          Value: RaggedTensorSpec(TensorShape([None, None]), tf.int64, 1, tf.int64)
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
