
OpenFaaS serve model using TensorFlow Serving

I'd like to serve a TensorFlow model using OpenFaaS. Basically, I'd like to invoke the "serve" function in such a way that TensorFlow Serving exposes my model.

OpenFaaS is running correctly on Kubernetes and I am able to invoke functions via curl or from the UI.

I used the incubator-flask example as a starting point, but I keep receiving 502 Bad Gateway all the time.

The OpenFaaS project looks like the following:

serve/
  - Dockerfile
stack.yaml

The inner Dockerfile is the following:

FROM tensorflow/serving

RUN mkdir -p /home/app

RUN apt-get update \
    && apt-get install curl -yy

RUN echo "Pulling watchdog binary from Github." \
    && curl -sSLf https://github.com/openfaas-incubator/of-watchdog/releases/download/0.4.6/of-watchdog > /usr/bin/fwatchdog \
    && chmod +x /usr/bin/fwatchdog

WORKDIR /root/

# remove unnecessary logs from S3
ENV TF_CPP_MIN_LOG_LEVEL=3

ENV AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
ENV AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
ENV AWS_REGION=${AWS_REGION}
ENV S3_ENDPOINT=${S3_ENDPOINT} 

ENV fprocess="tensorflow_model_server --rest_api_port=8501 \
    --model_name=${MODEL_NAME} \
    --model_base_path=${MODEL_BASE_PATH}"

# Set to true to see request in function logs
ENV write_debug="true"
ENV cgi_headers="true"
ENV mode="http"
ENV upstream_url="http://127.0.0.1:8501"
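# Note: in http mode the watchdog starts fprocess once as a long-running
# process and proxies each incoming request to upstream_url.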

# gRPC tensorflow serving
# EXPOSE 8500

# REST tensorflow serving
# EXPOSE 8501

RUN touch /tmp/.lock
HEALTHCHECK --interval=5s CMD [ -e /tmp/.lock ] || exit 1

CMD [ "fwatchdog" ]
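
As a sanity check, this image can also be run stand-alone: of-watchdog listens on port 8080 and proxies to TensorFlow Serving on 8501. The sketch below assumes the model variables were resolved at build time; mnist is a placeholder model name, not something fixed by the project.

$ docker run -p 8080:8080 repo/serve-model:latest

# in another terminal: hit the watchdog, which forwards to TF Serving
$ curl -d '{"inputs": ["1.0", "2.0", "5.0"]}' \
    -X POST http://127.0.0.1:8080/v1/models/mnist:predict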

The stack.yaml file looks like the following:

provider:
  name: faas
  gateway: https://gateway-url:8080

functions:
  serve:
    lang: dockerfile
    handler: ./serve
    image: repo/serve-model:latest
    imagePullPolicy: always
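
For reference, the non-secret variables could also be declared directly in the stack file instead of being passed with -e at deploy time. This is only a sketch; the MODEL_NAME and MODEL_BASE_PATH values here are placeholders:

functions:
  serve:
    lang: dockerfile
    handler: ./serve
    image: repo/serve-model:latest
    imagePullPolicy: always
    environment:
      MODEL_NAME: mnist
      MODEL_BASE_PATH: s3://my-bucket/models/mnist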

I build the image with faas-cli build -f stack.yaml and then push it to my Docker registry with faas-cli push -f stack.yaml.

When I execute faas-cli deploy -f stack.yaml -e AWS_ACCESS_KEY_ID=... I get Accepted 202 and the function appears correctly among my functions. Now, I want to invoke TensorFlow Serving on the model I specified in my ENV.
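
The full deploy command looks something like this, with all values below being placeholders:

$ faas-cli deploy -f stack.yaml \
    -e AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" \
    -e AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" \
    -e AWS_REGION=eu-west-1 \
    -e S3_ENDPOINT=s3.amazonaws.com \
    -e MODEL_NAME=mnist \
    -e MODEL_BASE_PATH=s3://my-bucket/models/mnist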

The way I try to make it work is to use curl in this way:

curl -d '{"inputs": ["1.0", "2.0", "5.0"]}' -X POST https://gateway-url:8080/function/deploy-model/v1/models/mnist:predict

but I always obtain 502 Bad Gateway.

Does anybody have experience with OpenFaaS and TensorFlow Serving? Thanks in advance.

PS

If I run TensorFlow Serving without of-watchdog (basically without the OpenFaaS stuff), the model is served correctly.
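
Concretely, a direct run along these lines serves the model fine. This follows the standard TensorFlow Serving Docker usage; the model path and the mnist name are placeholders:

$ docker run -p 8501:8501 \
    --mount type=bind,source=/path/to/mnist,target=/models/mnist \
    -e MODEL_NAME=mnist tensorflow/serving

$ curl -d '{"inputs": ["1.0", "2.0", "5.0"]}' \
    -X POST http://127.0.0.1:8501/v1/models/mnist:predict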

Elaborating on the link mentioned by @viveksyngh:

tensorflow-serving-openfaas:

Example of packaging TensorFlow Serving with OpenFaaS to be deployed and managed through OpenFaaS with auto-scaling, scale-from-zero and a sane configuration for Kubernetes.

This example was adapted from: https://www.tensorflow.org/serving

Pre-reqs:

OpenFaaS

OpenFaaS CLI

Docker

Instructions:

Clone the repo

$ mkdir -p ~/dev/

$ cd ~/dev/

$ git clone https://github.com/alexellis/tensorflow-serving-openfaas

Clone the sample model and copy it to the function's build context

$ cd ~/dev/tensorflow-serving-openfaas

$ git clone https://github.com/tensorflow/serving

$ cp -r serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu ./ts-serve/saved_model_half_plus_two_cpu
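
After the copy, the build context should look roughly like this (the exact layout in the repo may differ):

tensorflow-serving-openfaas/
  stack.yml
  ts-serve/
    Dockerfile
    saved_model_half_plus_two_cpu/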

Edit the Docker Hub username

You need to edit the stack.yml file and replace alexellis2 with your Docker Hub account.
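
The relevant fragment looks something like this; apart from the image prefix, the exact fields in the repo's stack.yml may differ:

functions:
  ts-serve:
    lang: dockerfile
    handler: ./ts-serve
    image: alexellis2/ts-serve:latest   # replace alexellis2 with your account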

Build the function image

$  faas-cli build

You should now have a Docker image in your local library which you can deploy to a cluster with faas-cli up.

Test the function locally

All OpenFaaS images can be run stand-alone without OpenFaaS installed. Let's do a quick test, but replace alexellis2 with your own name.

$ docker run -p 8081:8080 -ti alexellis2/ts-serve:latest

Now in another terminal:

$ curl -d '{"instances": [1.0, 2.0, 5.0]}' \
   -X POST http://127.0.0.1:8081/v1/models/half_plus_two:predict

{
    "predictions": [2.5, 3.0, 4.5]
}

From here you can run faas-cli up and then invoke your function from the OpenFaaS UI, CLI or REST API.

$ export OPENFAAS_URL=http://127.0.0.1:8080

$ curl -d '{"instances": [1.0, 2.0, 5.0]}' $OPENFAAS_URL/function/ts-serve/v1/models/half_plus_two:predict

{
    "predictions": [2.5, 3.0, 4.5]
}
