
Unable to deploy locally trained Logistic Regression model on AWS Sagemaker

I have trained a Logistic Regression model on my local machine. I saved the model using Joblib and tried deploying it on AWS SageMaker using the "Linear-Learner" image.

I am facing issues during deployment: the process keeps running, the status stays at "Creating", and it never turns to "InService".

import time
from time import gmtime, strftime

import boto3

# SageMaker client; endpoint_config_name is created in an earlier step
sm_client = boto3.client("sagemaker")

endpoint_name = "DEMO-LogisticEndpoint" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_name)
create_endpoint_response = sm_client.create_endpoint(
    EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name
)
print(create_endpoint_response["EndpointArn"])

resp = sm_client.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
print("Status: " + status)

# Poll until the endpoint leaves the "Creating" state
while status == "Creating":
    time.sleep(60)
    resp = sm_client.describe_endpoint(EndpointName=endpoint_name)
    status = resp["EndpointStatus"]
    print("Status: " + status)

The while loop keeps executing, and the status never changes.

Background: What is important to understand is that the endpoint runs a container that includes the serving software. Each container expects a certain type of model, so you need to make sure that your model, and the way you package it, matches what the container expects.
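
When an endpoint is stuck like this, it helps to look at why the container is unhappy instead of only polling the status. A minimal diagnostic sketch, assuming the same boto3 sm_client and endpoint_name as in the question (the log group path follows SageMaker's default /aws/sagemaker/Endpoints/<endpoint-name> convention):

import boto3

sm_client = boto3.client("sagemaker")
logs_client = boto3.client("logs")

# FailureReason is only populated once the endpoint actually fails;
# while it is stuck in "Creating" the container logs are more useful.
resp = sm_client.describe_endpoint(EndpointName=endpoint_name)
print("Status:", resp["EndpointStatus"])
print("FailureReason:", resp.get("FailureReason", "<not set>"))

# The serving container writes its output (including model-loading errors,
# e.g. a Joblib artifact that the Linear-Learner image cannot read) to CloudWatch.
log_group = "/aws/sagemaker/Endpoints/" + endpoint_name
for stream in logs_client.describe_log_streams(logGroupName=log_group)["logStreams"]:
    events = logs_client.get_log_events(
        logGroupName=log_group, logStreamName=stream["logStreamName"]
    )
    for event in events["events"]:
        print(event["message"])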

Two easy paths forward:

  1. Linear-Learner is a SageMaker built-in algorithm, so a straightforward path would be to train it in the cloud. See the example notebook; this makes it very easy to deploy.
  2. Use Scikit-learn Logistic Regression, which you can train locally and deploy to SageMaker using the scikit-learn container (XGBoost is another easy path); a minimal sketch follows this list.
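
To illustrate option 2, here is a minimal sketch using the SageMaker Python SDK's SKLearnModel. The S3 path, IAM role, instance type, framework version, and the inference.py entry point are placeholder assumptions, not part of the original question; the entry point only needs a model_fn that loads the Joblib file.

# inference.py -- hypothetical entry point packaged alongside the model
import os
import joblib

def model_fn(model_dir):
    # The SageMaker scikit-learn serving container calls this to load the model
    return joblib.load(os.path.join(model_dir, "model.joblib"))

# deploy script -- run after uploading model.tar.gz (containing model.joblib)
# to S3; bucket, role, and version below are placeholders
from sagemaker.sklearn.model import SKLearnModel

sklearn_model = SKLearnModel(
    model_data="s3://my-bucket/model/model.tar.gz",
    role="arn:aws:iam::123456789012:role/MySageMakerRole",
    entry_point="inference.py",
    framework_version="1.2-1",
)

predictor = sklearn_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)

This way the model artifact and the serving container agree on the format, which is exactly the mismatch described above when a Joblib file is placed behind the Linear-Learner image.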

Otherwise, you can always go more advanced and use any custom algorithm/framework by bringing your own container. Google for existing implementations (e.g., CatBoost/SageMaker).
