
How to deploy a Hugging Face model via SageMaker Pipelines

Below is the code to get a model from the Hugging Face Hub and deploy it via SageMaker.

from sagemaker.huggingface import HuggingFaceModel
import sagemaker

role = sagemaker.get_execution_role()

# Hub Model configuration. https://huggingface.co/models
hub = {
    'HF_MODEL_ID':'siebert/sentiment-roberta-large-english',
    'HF_TASK':'text-classification'
}

# create the Hugging Face Model
huggingface_model = HuggingFaceModel(
    transformers_version='4.17.0',
    pytorch_version='1.10.2',
    py_version='py38',
    env=hub,
    role=role, 
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1, # number of instances
    instance_type='ml.g4dn.xlarge' # EC2 instance type
)

How can I deploy this model via a SageMaker pipeline?

How can I include this code in a SageMaker pipeline?

Prerequisites

SageMaker Pipelines offers many different components with a lot of functionality. Your question is quite general and needs to be grounded in a specific problem.

You have to start by setting up a pipeline. See the complete guide "Defining a Pipeline".
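As a rough sketch of what that guide walks through (not a runnable deployment on its own — it assumes the SageMaker Python SDK v2, an execution role, and placeholder names such as "hf-deploy-pipeline"), a pipeline is just a named collection of parameters and steps that you upsert and start:

```python
# Minimal sketch of defining and running a pipeline, assuming the SageMaker
# Python SDK v2. The pipeline name and the steps list are placeholders.
from sagemaker.workflow.parameters import ParameterString
from sagemaker.workflow.pipeline import Pipeline

def build_pipeline(steps):
    """Assemble a Pipeline from a list of already-defined step objects."""
    instance_type = ParameterString(name="DeployInstanceType",
                                    default_value="ml.g4dn.xlarge")
    return Pipeline(name="hf-deploy-pipeline",
                    parameters=[instance_type],
                    steps=steps)

# pipeline = build_pipeline(steps=[...])   # e.g. training / register steps
# pipeline.upsert(role_arn=role)           # create or update the definition
# pipeline.start()                         # launch an execution
```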

Quick answer

My answer is to follow this official AWS guide, which addresses your question exactly:

SageMaker Pipelines: train a Hugging Face model, deploy it with a Lambda step


General explanation

Basically, you need to build your pipeline architecture from the components you need and register the trained model in the Model Registry.
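The registration part could look roughly like the sketch below, reusing the HuggingFaceModel from the question; note that the model package group name "sentiment-pipeline-models" and the step name are assumptions, not anything mandated by the SDK:

```python
# Sketch: wrap model.register(...) in a ModelStep so that a pipeline run
# writes the model into the Model Registry (requires sagemaker >= 2.90).
from sagemaker.huggingface import HuggingFaceModel
from sagemaker.workflow.model_step import ModelStep

def make_register_step(hf_model: HuggingFaceModel) -> ModelStep:
    register_args = hf_model.register(
        content_types=["application/json"],
        response_types=["application/json"],
        inference_instances=["ml.g4dn.xlarge"],
        transform_instances=["ml.g4dn.xlarge"],
        model_package_group_name="sentiment-pipeline-models",  # assumed name
        approval_status="PendingManualApproval",
    )
    return ModelStep(name="RegisterSentimentModel", step_args=register_args)
```

The resulting step is then passed into the pipeline's steps list.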

Next, there are two paths you can follow:

  1. Trigger a Lambda that automatically deploys the registered model (as the guide does).
  2. Outside the pipeline context, deploy automatically by retrieving the ARN of the registered model from the Model Registry. You can get it from register_step.properties.ModelPackageArn, or in an external script using boto3 (e.g. using list_model_packages).
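The second path can be sketched as follows; this is a minimal, hedged example where the model package group name and endpoint name are assumptions, and only the ARN-selection helper is pure Python (the deployment itself needs AWS credentials):

```python
from datetime import datetime

def latest_approved_arn(summaries):
    """Pick the ARN of the newest package with ModelApprovalStatus == 'Approved'
    from a list_model_packages ModelPackageSummaryList."""
    approved = [s for s in summaries if s.get("ModelApprovalStatus") == "Approved"]
    if not approved:
        return None
    newest = max(approved, key=lambda s: s["CreationTime"])
    return newest["ModelPackageArn"]

def deploy_latest(group_name, role, endpoint_name):
    """Look up the latest approved package in the Model Registry and deploy it."""
    import boto3
    from sagemaker import ModelPackage

    sm = boto3.client("sagemaker")
    resp = sm.list_model_packages(ModelPackageGroupName=group_name)
    arn = latest_approved_arn(resp["ModelPackageSummaryList"])
    model = ModelPackage(role=role, model_package_arn=arn)
    model.deploy(initial_instance_count=1,
                 instance_type="ml.g4dn.xlarge",
                 endpoint_name=endpoint_name)

# deploy_latest("sentiment-pipeline-models", role, "sentiment-endpoint")  # assumed names
```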


