
Serving a Keras model with Tensorflow Serving

The Tensorflow 1.12 release notes state: "Keras models can now be directly exported to the SavedModel format (tf.contrib.saved_model.save_keras_model()) and used with Tensorflow Serving". So I gave it a shot -

I have exported a simple model with this op using a single line. However, Tensorflow Serving doesn't recognize the model. I guess the problem is with the docker call, and maybe with a missing 'signature_defs' in the model definition. I would be thankful for info regarding the missing steps.

1. Training and exporting the model to TF Serving:

Here is the code, based on Jason Brownlee's first NN (chosen for its simplicity)

(the training data, as a short CSV file, is here):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.contrib.saved_model import save_keras_model
import numpy

# fix random seed for reproducibility
numpy.random.seed(7)

# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]

# create model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Fit the model
model.fit(X, Y, epochs=150, batch_size=10)

# evaluate the model
scores = model.evaluate(X, Y)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))

# calculate predictions
predictions = model.predict(X)
# round predictions
rounded = [round(x[0]) for x in predictions]
print(rounded)

# Save the model for serving
path = '/TensorFlow_Models/Keras_serving/saved_model' # full path of where to save the model
save_keras_model(model, path)
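
One detail that is easy to miss here: save_keras_model() writes the model into a timestamped version subdirectory under the given path, which is exactly the versioned layout TF Serving scans for. A minimal sketch to make that visible, capturing the return value of the call above (the timestamp shown is made up):

# save_keras_model returns the actual export directory: a timestamped
# version subdirectory of 'path' (the timestamp below is illustrative)
export_path = save_keras_model(model, path)
print(export_path)  # e.g. /TensorFlow_Models/Keras_serving/saved_model/1546785123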

2. Setting up the Tensorflow server:

The server can be set up via docker or from its own build. TF recommends docker (TF ref). Following this, and based on the TF blog and the TF Serving tutorial:

  1. Install Docker (from here)
  2. Get the latest TF Serving version:

docker pull tensorflow/serving

  3. Activate TF Serving with this model (TF ref):

docker run -p 8501:8501 --name NNN --mount type=bind,source=SSS,target=TTT -e MODEL_NAME=MMM -t tensorflow/serving &

I would be happy if one could confirm the placeholders (a filled-in example follows the list below):

  • NNN - the docker container name, which is used, for instance, to kill the process. It can be set arbitrarily (e.g. to: mydocker).
  • MMM - the name of the model, which seems to be set arbitrarily.
  • SSS - the folder where the model is located, as a full path.
  • TTT - what should this be set to?
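
For reference, here is a filled-in version of the command under those assumptions (the container name, model name, and target path are illustrative; the source path is the export folder from step 1):

docker run -p 8501:8501 --name mydocker \
    --mount type=bind,source=/TensorFlow_Models/Keras_serving/saved_model,target=/models/keras_model \
    -e MODEL_NAME=keras_model -t tensorflow/serving &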

3. The client:

The server can get requests either over gRPC or over the RESTful API. Assuming we go with the RESTful API, the model can be accessed by using curl (here is a TF example). But how do we set the input/output of the model? Are SignatureDefs needed (ref)?
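
My best guess for such a curl call, assuming MODEL_NAME=keras_model on port 8501 as in the example above (the row is the first line of the training CSV), would be a sketch like:

curl -X POST http://localhost:8501/v1/models/keras_model:predict \
    -d '{"instances": [[6, 148, 72, 35, 0, 33.6, 0.627, 50]]}'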

All in all, while "Keras models can now be directly exported to the SavedModel format (tf.contrib.saved_model.save_keras_model()) and used with Tensorflow Serving", as stated in the TF 1.12 release notes, there is still a way to go in order to actually serve the model. I would be happy for ideas on completing this.

You are all correct about NNN and SSS. NNN can be arbitrary; if not specified, docker will give it a random name.

For MMM, better to give it a meaningful name.

For TTT, this is general to the docker run command, and you can refer to the docker doc. This is where you map (bind) SSS inside the container, usually set to /models/$MODEL_NAME. If you get into this container and open /models/$MODEL_NAME, you will see the version folder(s), just as in SSS.

The input of the RESTful API is the same as the input to the model in the TensorFlow code; in your example it is X = dataset[:,0:8].
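
As a sketch of what that means for a REST client (assuming the keras_model name and port 8501 from the docker example above; the predicted value is illustrative):

import json
import requests

# one 8-feature row, shaped like X = dataset[:,0:8]
payload = {"instances": [[6, 148, 72, 35, 0, 33.6, 0.627, 50]]}
response = requests.post(
    "http://localhost:8501/v1/models/keras_model:predict",
    data=json.dumps(payload))
print(response.json())  # e.g. {'predictions': [[0.68]]}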

If you didn't define a signature when saving the model, as in the example in the doc, then it's not necessary for serving.
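
If you want to double-check what signature the export actually got, the saved_model_cli tool that ships with TensorFlow can show it; a sketch (replace <version> with the timestamped subdirectory created by the export):

saved_model_cli show --dir /TensorFlow_Models/Keras_serving/saved_model/<version> --all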

If you want to create an API endpoint server with Keras out of the box, I recommend you check out BentoML (https://github.com/bentoml/bentoml).

It is an open source Python library that makes it easy to serve and deploy machine learning models in the cloud. Disclosure: I am one of the creators of BentoML.

Here is an example of a Keras model using BentoML (https://github.com/bentoml/gallery/blob/master/keras/fashion-mnist/keras-fashion-mnist.ipynb).

With a few lines of code, you can package your model, preprocessing code, and dependencies into a standardized format that can be distributed in different forms (Docker, CLI tool, Spark UDF, etc.) and deployed to different cloud platforms (Sagemaker, Lambda, Kubernetes).

After you finish training your model in a Jupyter notebook, you can add a BentoService spec to create your model in a standardized bundle format:

%%writefile my_model.py

from bentoml import api, artifacts, env, BentoService
from bentoml.artifact import KerasModelArtifact
from bentoml.handlers import DataframeHandler

@env(pip_dependencies=['keras', 'tensorflow==1.14.0'])
@artifacts([KerasModelArtifact('model')])
class KerasModelService(BentoService):

    @api(DataframeHandler)
    def predict(self, df):
        return self.artifacts.model.predict(df)

In the next cell you can pack your model and save it:

# 1) import the custom BentoService defined above
from my_model import KerasModelService

# 2) `pack` it with required artifacts
keras_svc = KerasModelService.pack(model=model)

# 3) save your BentoService to a file archive
saved_path = keras_svc.save()

This will package your model, preprocessing code, and dependencies, and also generate a Dockerfile and other files for you.

With this bundle, you can easily start a docker container with REST API endpoints:

cd saved_path && docker build -t keras-model .
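
From there, a hedged usage sketch (5000 was BentoML's default serving port at the time; the exact JSON shape the DataframeHandler accepts may vary by version):

docker run -p 5000:5000 keras-model

# then POST a dataframe-shaped payload to the predict endpoint:
curl -X POST -H "Content-Type: application/json" \
    -d '[[6, 148, 72, 35, 0, 33.6, 0.627, 50]]' \
    http://localhost:5000/predict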

Feel free to ping me if you have any questions.

Thanks for your question; it is more or less linked with mine: tensorflow-serving signature for an XOR.

I share exactly your doubts about TTT.
