

Getting Model Explanations with Tensorflow Serving and SavedModel Estimators

I trained a BoostedTreesClassifier and would like to use the "directional feature contributions" as laid out in this tutorial. Basically, it lets you "interpret" the model's prediction and measure each feature's contribution by using the experimental_predict_with_explanations method. This works great after I train the model and then call the method.
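For reference, here is roughly the post-training usage I mean, following the tutorial (a minimal sketch; est and eval_input_fn stand in for my trained BoostedTreesClassifier and its evaluation input function):

import pandas as pd

# Each prediction dict carries a 'dfc' entry with the per-feature
# directional contributions, alongside the usual prediction keys.
pred_dicts = list(est.experimental_predict_with_explanations(eval_input_fn))
df_dfc = pd.DataFrame([pred['dfc'] for pred in pred_dicts])
print(df_dfc.head())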

But I want to export the trained estimator with the export_saved_model method. When I load the estimator back into a Python environment with tf.saved_model.load, I apparently lose that functionality, because I can no longer call the experimental_predict_with_explanations method. The loaded model only has the "predict" signature.
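For example, this is roughly what I see when I load the export back (a sketch; the export path is illustrative):

import tensorflow as tf

# Load the exported SavedModel back and inspect what it exposes.
loaded = tf.saved_model.load('exported_model/1581234567')  # illustrative path
print(list(loaded.signatures.keys()))
# Only the serving signatures (e.g. 'predict') are available; the loaded
# object has no experimental_predict_with_explanations method.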

Ultimately I'd like to use this trained estimator with Tensorflow Serving. I don't suppose that functionality is available through the "Predict" SignatureDef. Has anyone tried this before?

A trained Estimator can be used with Tensorflow Serving through the "Predict" SignatureDef.

This can be achieved by using build_raw_serving_input_receiver_fn instead of build_parsing_serving_input_receiver_fn.

The relevant line of code is shown below:

serving_input_receiver_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(feature_placeholders)
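For comparison, a build_parsing_serving_input_receiver_fn based export would expect serialized tf.Example protos rather than raw tensors; a minimal sketch (reusing the my_feature_columns list defined in the full code below) would be:

feature_spec = tf.feature_column.make_parse_example_spec(my_feature_columns)
parsing_input_receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)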

The complete code for a classification model exported with the Predict SignatureDef is shown below:

import tensorflow as tf
import iris_data
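# Note: tf.placeholder below assumes TF 1.x APIs; under TF 2.x it would be
# tf.compat.v1.placeholder.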


BATCH_SIZE = 100
STEPS = 1000
Export_Dir = 'Premade_Estimator_Export_Raw'  # No version number needed; export_saved_model creates a timestamped version subdirectory

(train_x, train_y), (test_x, test_y) = iris_data.load_data()
type(train_x.values[0][0])

# Feature columns describe how to use the input.
my_feature_columns = []
for key in train_x.keys():
    my_feature_columns.append(tf.feature_column.numeric_column(key=key))

print(my_feature_columns)

columns = [('SepalLength', tf.float32), ('SepalWidth', tf.float32),
           ('PetalLength', tf.float32), ('PetalWidth', tf.float32)]


feature_placeholders = {name: tf.placeholder(dtype, [1], name=name + "_placeholder") for name, dtype in columns}

print(feature_placeholders)

print(type(train_x))

# Build a DNN with 2 hidden layers and 10 nodes in each hidden layer.
classifier = tf.estimator.DNNClassifier(
    feature_columns=my_feature_columns,
    hidden_units=[10, 10],  # Two hidden layers of 10 nodes each.
    n_classes=3)            # The model must choose between 3 classes.

# Train the Model.
classifier.train(input_fn=lambda: iris_data.train_input_fn(train_x, train_y, BATCH_SIZE),
                 steps=STEPS)

eval_result = classifier.evaluate(input_fn=lambda:iris_data.eval_input_fn(test_x, test_y, BATCH_SIZE))

print('\nTest set accuracy: {accuracy:0.3f}\n'.format(**eval_result))

# Generate predictions from the model
expected = ['Setosa', 'Versicolor', 'Virginica']
predict_x = {
    'SepalLength': [5.1, 5.9, 6.9],
    'SepalWidth': [3.3, 3.0, 3.1],
    'PetalLength': [1.7, 4.2, 5.4],
    'PetalWidth': [0.5, 1.5, 2.1],
}

predictions = classifier.predict(
    input_fn=lambda: iris_data.eval_input_fn(features=predict_x, labels=None,
                                             batch_size=BATCH_SIZE))

template = ('\nPrediction is "{}" ({:.1f}%), expected "{}"')

for pred_dict, expec in zip(predictions, expected):
    class_id = pred_dict['class_ids'][0]
    probability = pred_dict['probabilities'][class_id]

    print(template.format(iris_data.SPECIES[class_id],100 * probability, expec))

# This is the Important Step
serving_input_receiver_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(feature_placeholders)
export_dir = classifier.export_saved_model(Export_Dir, serving_input_receiver_fn)
print('Exported to {}'.format(export_dir))

The SignatureDef of the above model is shown below:

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['predict']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['PetalLength'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1)
        name: PetalLength_placeholder:0
    inputs['PetalWidth'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1)
        name: PetalWidth_placeholder:0
    inputs['SepalLength'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1)
        name: SepalLength_placeholder:0
    inputs['SepalWidth'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1)
        name: SepalWidth_placeholder:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['all_class_ids'] tensor_info:
        dtype: DT_INT32
        shape: (-1, 3)
        name: dnn/head/predictions/Tile:0
    outputs['all_classes'] tensor_info:
        dtype: DT_STRING
        shape: (-1, 3)
        name: dnn/head/predictions/Tile_1:0
    outputs['class_ids'] tensor_info:
        dtype: DT_INT64
        shape: (-1, 1)
        name: dnn/head/predictions/ExpandDims_2:0
    outputs['classes'] tensor_info:
        dtype: DT_STRING
        shape: (-1, 1)
        name: dnn/head/predictions/str_classes:0
    outputs['logits'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 3)
        name: dnn/logits/BiasAdd:0
    outputs['probabilities'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 3)
        name: dnn/head/predictions/probabilities:0
  Method name is: tensorflow/serving/predict

Inference can be performed using the commands below:

sudo docker pull tensorflow/serving

sudo docker run -p 8501:8501 --mount type=bind,source=/usr/local/google/home/Jupyter_Notebooks/TF_Serving/Serving_Made_Easy/Serving_Demystified/Premade_Estimator_Export_Raw,target=/models/Premade_Estimator_Export_Raw -e MODEL_NAME=Premade_Estimator_Export_Raw -t tensorflow/serving &

curl -d '{"signature_name":"predict","instances": [{"SepalLength":[5.1],"SepalWidth":[3.3],"PetalLength":[1.7],"PetalWidth":[0.5]}]}' \
  -X POST http://localhost:8501/v1/models/Premade_Estimator_Export_Raw:predict

The output is shown below:

{"predictions": [{ "all_classes": ["0", "1", "2"], "probabilities": [0.996251881, 0.00374808488, 3.86118275e-15], "logits": [14.2761269, 8.69337177, -18.9079208], "class_ids": [0], "classes": ["0"], "all_class_ids": [0, 1, 2]}]}
