
How to deploy a trained ML model?

I am new to machine learning. I've finished k-means clustering and the ML model is trained. My question is: how do I pass input to my trained model?

Example: consider a Google image-processing ML model. We pass it an image and it gives the proper output, such as the emotion in that picture.

Now my doubt is how to do something like that. I have used k-means to predict which mall customers spend more money buying products, and now I want to call my trained model and pass input to it.

I am using Python and scikit-learn.

What you want here is an API to which you can send requests/input and get back responses/predictions.

You can create a Flask server, save your trained model as a pickle file, and load it when making predictions. This takes some work to set up.
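A minimal sketch of the save/load step, assuming a k-means model trained on two features (the sample data and file name here are illustrative, not from the original question):

```python
import pickle

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical training data: [annual income, spending score] per customer
X = np.array([[15, 39], [16, 81], [17, 6], [18, 77], [19, 40], [20, 76]])

model = KMeans(n_clusters=2, n_init=10, random_state=42).fit(X)

# Save the trained model to disk
with open("kmeans_model.pkl", "wb") as f:
    pickle.dump(model, f)

# Later (e.g. inside your server process), load it and predict
with open("kmeans_model.pkl", "rb") as f:
    loaded = pickle.load(f)

new_customer = [[18, 60]]  # [annual income, spending score]
cluster = loaded.predict(new_customer)[0]
print(f"Customer assigned to cluster {cluster}")
```

The server only needs the load-and-predict half; training and saving happen once, offline.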


Note: the Flask built-in server is not production-ready. You might want to look at uWSGI + Nginx.
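For reference, a minimal uWSGI config sketch might look like this (file names and the `app:app` module path are assumptions; adjust to your project layout):

```ini
; uwsgi.ini -- assumes your Flask app object is `app` inside app.py
[uwsgi]
module = app:app
master = true
processes = 4
socket = /tmp/flask_app.sock
chmod-socket = 660
vacuum = true
die-on-term = true
```

Nginx then proxies HTTP traffic to that socket, and uWSGI manages the worker processes running your Flask app.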

In case you are using Docker, https://hub.docker.com/r/tiangolo/uwsgi-nginx-flask/ will be a great help.

Deploying an ML model usually depends on your business needs. If you have a lot of data that requires prediction and you don't need the results right away, you can do batch prediction. A typical use case for this method is recommendation. It is usually deployed as part of a larger pipeline. There are many ways to set up such a pipeline, and it really depends on what your company has, so I won't go into much detail here.
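In its simplest form, a batch job just loads the saved model, scores a whole file of customers, and writes the results out. A sketch, assuming a pickled model and CSV files with hypothetical `annual_income`/`spending_score` columns:

```python
import pickle

import pandas as pd


def batch_predict(model_path, customers_csv, output_csv):
    """Score every customer in one pass and save the results."""
    with open(model_path, "rb") as f:
        model = pickle.load(f)
    df = pd.read_csv(customers_csv)
    # Column names are assumptions; adjust to your dataset
    df["cluster"] = model.predict(df[["annual_income", "spending_score"]])
    df.to_csv(output_csv, index=False)
```

In a real pipeline this function would be wrapped by a scheduler (cron, Airflow, etc.) rather than called by hand.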

Another method is what others have mentioned: real-time serving. A typical use case for this is fraud detection, which requires a prediction right away. The service takes requests via REST/gRPC/other protocols and responds with the prediction result. Depending on your latency requirements, people use high-performance environments (Java/C) to achieve low latency. Typically, though, a Flask server is fine for this job in most cases.

For a Flask app, you would need to create an endpoint that takes in the request data, makes a prediction, and returns the response.
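A minimal sketch of such an endpoint, assuming the k-means model was pickled to `kmeans_model.pkl` beforehand (the route name and JSON shape are illustrative):

```python
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)


def load_model(path="kmeans_model.pkl"):
    # Assumes you saved your trained KMeans model with pickle beforehand
    with open(path, "rb") as f:
        return pickle.load(f)


@app.route("/predict", methods=["POST"])
def predict():
    # Expect JSON like {"features": [[annual_income, spending_score], ...]}
    data = request.get_json()
    model = load_model()
    clusters = model.predict(data["features"]).tolist()
    return jsonify({"clusters": clusters})


if __name__ == "__main__":
    # Development server only; put uWSGI/Nginx in front for production
    app.run(host="0.0.0.0", port=5000)
```

A client would then POST `{"features": [[18, 60]]}` to `/predict` and get back the assigned cluster. In practice you would load the model once at startup instead of per request; it is done inside the handler here only to keep the sketch self-contained.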

Let me know if this helps.

Just a self-plug: we open-sourced an ML toolkit for packaging and deploying models. The tagline is "from Jupyter notebook to production in 5 minutes". It exports your models and dependencies into an archive you can store in a local file or S3. You can import the archive as a Python module to predict, or use the built-in REST server to make real-time predictions. You can also create a Docker image from the generated Dockerfile for production. The open-source project is called BentoML.

Since the question was asked in 2019, many Python libraries have appeared that let users quickly deploy machine learning models without having to learn Flask, containerization, or web hosting. The best solution depends on factors like how long you need the model deployed and whether it needs to handle heavy traffic.

For the use case the asker described, the gradio library ( http://www.gradio.app/ ) could be helpful; it lets users soft-deploy models with public links and user interfaces in a few lines of Python code, like below:

[screenshot: a Gradio web interface generated from a few lines of Python]
