
Where to deploy a machine learning model for API predictions?

I created a machine learning model with Prophet:

https://www.kaggle.com/marcmetz/ticket-sales-prediction-facebook-prophet

I have a web application running on Django. From that application, I want to be able to look up predictions from the model I created. I assume the best way to do this is to deploy my model on Google Cloud Platform or AWS (?) and access forecasts through API calls from my web application to one of these services.

My question now: is the approach I described the right way to do this? I still struggle to decide whether AWS or Google Cloud is the better solution for my case, especially with Prophet. I could only find examples with scikit-learn. Does any of you have experience with this and can point me in the right direction?

It really depends on the type of model that you are using. In many cases, model inference takes a data point (similar to the data points you trained the model with), and the model generates a prediction for that requested data point. In such cases, you need to host the model somewhere in the cloud or on the edge.
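For that point-by-point inference pattern, the hosted service boils down to an endpoint that accepts a data point and returns the model's prediction. A minimal sketch of that request/response shape (the `model_predict` function here is a hypothetical stand-in for a real trained model, not Prophet):

```python
import json

# Hypothetical stand-in for a trained model: predicts ticket sales
# as a simple linear function of a day-of-week feature.
def model_predict(features):
    return 100.0 + 15.0 * features["day_of_week"]

# The kind of handler an inference endpoint (Lambda, Cloud Function,
# a SageMaker container, etc.) would expose: JSON in, JSON out.
def predict_handler(request_body):
    features = json.loads(request_body)
    prediction = model_predict(features)
    return json.dumps({"prediction": prediction})

print(predict_handler('{"day_of_week": 2}'))  # -> {"prediction": 130.0}
```

The Django application would then issue one HTTP call per data point it wants predicted, which is why this pattern requires an always-available hosted model.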

However, Prophet often generates the predictions for the future as part of training the model. In this case, you only need to serve the predictions that were already calculated, and you can serve them as a CSV file from S3, or as lookup values from DynamoDB or another lookup data store.
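Since a Prophet forecast is a fixed table of future dates, serving it can be a plain lookup with no model hosting at all. A sketch, assuming the forecast has already been exported (the CSV contents below are made-up values standing in for Prophet's `ds` and `yhat` columns):

```python
import csv
import io

# Made-up export of a Prophet forecast: the `ds` (date) and `yhat`
# (predicted value) columns, as they might be written to a CSV on S3.
FORECAST_CSV = """ds,yhat
2020-01-01,120.5
2020-01-02,133.2
2020-01-03,128.7
"""

def load_forecast(csv_text):
    """Parse the exported forecast into a date -> prediction mapping."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["ds"]: float(row["yhat"]) for row in reader}

forecast = load_forecast(FORECAST_CSV)

# A Django view would simply look up the requested date.
print(forecast["2020-01-02"])  # -> 133.2
```

With this approach the web application only needs read access to the exported file (or table); the model itself runs offline whenever you retrain and regenerate the forecast.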

