Unable to deploy locally trained Logistic Regression model on AWS Sagemaker
I trained a logistic regression model on my local machine, saved it with joblib, and tried to deploy it on AWS SageMaker using the "Linear-Learner" image.
The deployment never finishes: the endpoint status stays at "Creating" and never turns to "InService".
import time
from time import gmtime, strftime

import boto3

sm_client = boto3.client("sagemaker")
# endpoint_config_name is defined earlier in the notebook (not shown)

endpoint_name = "DEMO-LogisticEndpoint" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_name)

create_endpoint_response = sm_client.create_endpoint(
    EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name
)
print(create_endpoint_response["EndpointArn"])

resp = sm_client.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
print("Status: " + status)

while status == "Creating":
    time.sleep(60)
    resp = sm_client.describe_endpoint(EndpointName=endpoint_name)
    status = resp["EndpointStatus"]
    print("Status: " + status)
The while loop keeps executing and the status never changes.
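A side note on diagnosing this (a sketch, not part of the original post): rather than looping forever on "Creating", it helps to break out on any terminal status and surface the FailureReason field that DescribeEndpoint returns when an endpoint ends up "Failed". The helper below takes the describe call as a parameter so the loop can be exercised with a stub, without AWS credentials; the stub's failure message is invented for illustration.

```python
import time

def wait_for_endpoint(describe_fn, poll_seconds=60):
    """Poll until the endpoint leaves "Creating"/"Updating".

    describe_fn stands in for
    lambda: sm_client.describe_endpoint(EndpointName=endpoint_name)
    so the loop can be tested without AWS.
    """
    while True:
        resp = describe_fn()
        status = resp["EndpointStatus"]
        print("Status: " + status)
        if status in ("Creating", "Updating"):
            time.sleep(poll_seconds)
            continue
        if status != "InService":
            # DescribeEndpoint includes FailureReason for failed endpoints.
            raise RuntimeError(resp.get("FailureReason", "endpoint not InService"))
        return resp

# Stub simulating an endpoint that fails after one polling round.
responses = iter([
    {"EndpointStatus": "Creating"},
    {"EndpointStatus": "Failed", "FailureReason": "container did not pass ping check"},
])
try:
    wait_for_endpoint(lambda: next(responses), poll_seconds=0)
except RuntimeError as err:
    print("Deployment failed:", err)
```

In the real notebook you would call `wait_for_endpoint(lambda: sm_client.describe_endpoint(EndpointName=endpoint_name))`; an endpoint that never leaves "Creating" or flips to "Failed" usually reports the container-level cause in FailureReason.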
Background: What is important to understand is that the endpoint runs a container that includes the serving software. Each container expects a certain type of model. You need to make sure your model, and how you package it, matches what the container expects.
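As a concrete illustration of the packaging point (a sketch under the assumption that you serve the existing scikit-learn model through SageMaker's SKLearn container rather than the Linear-Learner image): that container expects a model.tar.gz artifact whose contents your inference script knows how to load, e.g. a joblib dump. The file names below are assumptions, not from the question.

```python
import tarfile

import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train a small logistic regression locally, as in the question.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Save with joblib, then package it the way a SageMaker framework
# container expects: a model.tar.gz containing the serialized model.
joblib.dump(model, "model.joblib")
with tarfile.open("model.tar.gz", "w:gz") as tar:
    tar.add("model.joblib")

# Sanity check: the archive round-trips to a working model.
with tarfile.open("model.tar.gz", "r:gz") as tar:
    tar.extract("model.joblib", path="unpacked")
restored = joblib.load("unpacked/model.joblib")
print(restored.predict(X[:3]))
```

With the archive uploaded to S3, a `sagemaker.sklearn.model.SKLearnModel` pointed at it (plus an inference script whose `model_fn` calls `joblib.load`) can be deployed to an endpoint. The Linear-Learner image, by contrast, only understands model artifacts produced by the Linear Learner algorithm itself, which is the likely reason a joblib file never comes up "InService" there.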
Two easy paths forward:
- Re-train the model with the SageMaker built-in Linear Learner algorithm, so the produced artifact matches what the Linear-Learner serving image expects; or
- Keep your locally trained scikit-learn model and deploy it with the SageMaker SKLearn serving container, which can load a joblib artifact via a small inference script.
Otherwise, you can always go more advanced and serve any custom algorithm/framework by bringing your own container. Search for existing implementations (e.g., CatBoost/SageMaker).
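For the bring-your-own-container route, the serving contract is small: SageMaker health-checks the container with GET /ping and sends inference traffic to POST /invocations, normally on port 8080. The stdlib sketch below demonstrates just that HTTP contract; the `predict` function is a placeholder (a real container would load your joblib artifact at startup), and the JSON request shape is an assumption.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    # Placeholder for restored_model.predict(...); not real inference.
    return [sum(row) for row in features]

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # SageMaker probes GET /ping; any 200 means "healthy".
        self.send_response(200 if self.path == "/ping" else 404)
        self.end_headers()

    def do_POST(self):
        # Inference requests arrive at POST /invocations.
        if self.path != "/invocations":
            self.send_response(404)
            self.end_headers()
            return
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"predictions": predict(payload["instances"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Local exercise of the contract on an ephemeral port
# (a real container would bind port 8080 and serve forever).
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
print(urllib.request.urlopen(f"http://127.0.0.1:{port}/ping").status)
server.shutdown()
```

A real container wraps logic like this (often via Flask/Gunicorn or a framework toolkit) in an image whose entrypoint starts the server, then registers that image with CreateModel.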