
Deploying a new model to a SageMaker endpoint without updating the config?

I want to deploy a new model to an existing AWS SageMaker endpoint. The model is trained by a different pipeline and stored as a mode.tar.gz in S3. The SageMaker endpoint config points to this as the model data URL. SageMaker, however, doesn't reload the model, and I don't know how to convince it to do so.

I want to deploy a new model to an AWS SageMaker endpoint. The model is trained by a different pipeline and stored as a mode.tar.gz in S3. I provisioned the SageMaker endpoint using AWS CDK. Now, within the training pipeline, I want to allow the data scientists to optionally upload their newly trained model to the endpoint for testing. I don't want to create a new model or an endpoint config. Also, I don't want to change the infrastructure (AWS CDK) code.

The model is uploaded to the S3 location that the SageMaker endpoint config is using as the model_data_url. Hence it should use the new model, but it doesn't load it. I know that SageMaker caches models inside the container, but I don't know how to force a fresh load.

This documentation suggests storing the model tarball under another name in the same S3 folder and altering the code that invokes the model. This is not possible for my application. Also, I don't want SageMaker to fall back to an old model once the TargetModel parameter is not present.

Here is what I am currently doing after uploading the model to S3. Even though the endpoint transitions into the Updating state, it does not force a model reload:


import boto3
from typing import Any, Dict


def update_sm_endpoint(endpoint_name: str) -> Dict[str, Any]:
    """Forces the sagemaker endpoint to reload model from s3"""
    sm = boto3.client("sagemaker")
    return sm.update_endpoint_weights_and_capacities(
        EndpointName=endpoint_name,
        DesiredWeightsAndCapacities=[
            {"VariantName": "main", "DesiredWeight": 1},
        ],
    )

Any ideas?

If you want to change the model served by a SageMaker endpoint, you have to create a new model object and a new endpoint configuration, then call update_endpoint. This will not change the name of the endpoint.

Comments on your question and the SageMaker docs:

  • the documentation you mention ("This documentation suggests to store the model tarball with another name in the same S3 folder, and alter the code to invoke the model") is for SageMaker Multi-Model Endpoints, a service to host multiple models in the same endpoint in parallel. This is not what you need. You need a single-model SageMaker endpoint, which you update with a new model and endpoint configuration.

  • also, the API you mention, sm.update_endpoint_weights_and_capacities, is not needed for what you want (unless you want a progressive rollout of traffic from model 1 to model 2).
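For completeness, here is what that weights API is actually for: shifting traffic between variants that already exist in the endpoint's current config. SageMaker routes requests to each variant in proportion to its weight. The variant names "model-1" and "model-2" below are hypothetical and would have to match your endpoint config:

```python
def traffic_split(weights: dict) -> dict:
    """Traffic fraction each variant receives: SageMaker distributes
    requests in proportion to the variant weights."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}


def shift_traffic(endpoint_name: str, weights: dict) -> None:
    """Move traffic between existing variants without a new endpoint config.

    Only works for variants already defined in the endpoint's current
    configuration; it never loads a new model artifact.
    """
    import boto3  # local import: only needed when actually calling AWS

    sm = boto3.client("sagemaker")
    sm.update_endpoint_weights_and_capacities(
        EndpointName=endpoint_name,
        DesiredWeightsAndCapacities=[
            {"VariantName": name, "DesiredWeight": weight}
            for name, weight in weights.items()
        ],
    )
```

A canary rollout would call, e.g., shift_traffic("my-endpoint", {"model-1": 0.9, "model-2": 0.1}) and later move all weight to "model-2". With a single variant, as in the question's snippet, changing the weight cannot trigger a model reload.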

