AWS SageMaker - How to load trained sklearn model to serve for inference?

I am trying to deploy a model trained with sklearn to an endpoint and serve it as an API for predictions. All I want to use SageMaker for is to deploy and serve a model I have already serialised using joblib, nothing more. Every blog I have read and the SageMaker Python documentation showed that an sklearn model has to be trained on SageMaker in order to be deployed on SageMaker.

When I was going through the SageMaker documentation, I learned that SageMaker does let users load a serialised model stored in S3, as shown below:

import os
import joblib

def model_fn(model_dir):
    # SageMaker calls this with the directory where model.tar.gz was extracted
    clf = joblib.load(os.path.join(model_dir, "model.joblib"))
    return clf

And this is what the documentation says about the model_dir argument:

SageMaker will inject the directory where your model files and sub-directories, saved by save, have been mounted. Your model function should return a model object that can be used for model serving.

This again means that training has to be done on SageMaker.

So, is there a way I can just specify the S3 location of my serialised model and have SageMaker de-serialise (or load) the model from S3 and use it for inference?

EDIT 1:

I used the code from the answer in my application, and I got the error below when trying to deploy from a SageMaker Studio notebook. I believe SageMaker is screaming that training wasn't done on SageMaker.

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-4-6662bbae6010> in <module>
      1 predictor = model.deploy(
      2     initial_instance_count=1,
----> 3     instance_type='ml.m4.xlarge'
      4 )

/opt/conda/lib/python3.7/site-packages/sagemaker/estimator.py in deploy(self, initial_instance_count, instance_type, serializer, deserializer, accelerator_type, endpoint_name, use_compiled_model, wait, model_name, kms_key, data_capture_config, tags, **kwargs)
    770         """
    771         removed_kwargs("update_endpoint", kwargs)
--> 772         self._ensure_latest_training_job()
    773         self._ensure_base_job_name()
    774         default_name = name_from_base(self.base_job_name)

/opt/conda/lib/python3.7/site-packages/sagemaker/estimator.py in _ensure_latest_training_job(self, error_message)
   1128         """
   1129         if self.latest_training_job is None:
-> 1130             raise ValueError(error_message)
   1131 
   1132     delete_endpoint = removed_function("delete_endpoint")

ValueError: Estimator is not associated with a training job

My code:

import sagemaker
from sagemaker import get_execution_role
# from sagemaker.pytorch import PyTorchModel
from sagemaker.sklearn import SKLearn
from sagemaker.predictor import RealTimePredictor, json_serializer, json_deserializer

sm_role = sagemaker.get_execution_role()  # IAM role to run SageMaker, access S3 and ECR

model_file = "s3://sagemaker-manual-bucket/sm_model_artifacts/model.tar.gz"   # Must be ".tar.gz" suffix

class AnalysisClass(RealTimePredictor):
    def __init__(self, endpoint_name, sagemaker_session):
        super().__init__(
            endpoint_name,
            sagemaker_session=sagemaker_session,
            serializer=json_serializer,
            deserializer=json_deserializer,   # To be able to use JSON serialization
            content_type='application/json'   # To be able to send JSON as HTTP body
        )

model = SKLearn(model_data=model_file,
                entry_point='inference.py',
                name='rf_try_1',
                role=sm_role,
                source_dir='code',
                framework_version='0.20.0',
                instance_count=1,
                instance_type='ml.m4.xlarge',
                predictor_cls=AnalysisClass)
predictor = model.deploy(initial_instance_count=1,
                         instance_type='ml.m4.xlarge')

Yes, you can. The AWS documentation focuses on the end-to-end flow from training to deployment in SageMaker, which gives the impression that training has to be done on SageMaker. The AWS documentation and examples should draw a clearer separation between training in an Estimator, saving and loading a model, and deploying a model to a SageMaker endpoint.

SageMaker Model

You need to create an AWS::SageMaker::Model resource, which refers to the "model" you have trained, and more. AWS::SageMaker::Model appears in the CloudFormation documentation, but that only explains what AWS resource you need.

The CreateModel API creates a SageMaker model resource. Its parameters specify the Docker image to use, the model location in S3, the IAM role to use, etc. See How SageMaker Loads Your Model Artifacts.
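For illustration, a minimal sketch of calling CreateModel directly via boto3; the model name, role ARN, and image URI below are placeholders, not values from the question:

import boto3

sm = boto3.client("sagemaker")

# Registers a SageMaker Model resource: which container image to run,
# where the model artefacts live in S3, and which IAM role to assume.
sm.create_model(
    ModelName="my-sklearn-model",                                     # placeholder
    ExecutionRoleArn="arn:aws:iam::111122223333:role/SageMakerRole",  # placeholder
    PrimaryContainer={
        "Image": "<account>.dkr.ecr.<region>.amazonaws.com/my-inference-image:latest",  # placeholder
        "ModelDataUrl": "s3://YOUR_BUCKET/YOUR_FOLDER/model.tar.gz",
    },
)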

Docker image

Obviously you need the framework (e.g. scikit-learn, TensorFlow, PyTorch) that you used to train your model in order to get inferences. You also need a Docker image that contains the framework and an HTTP front end to respond to the prediction calls. See SageMaker Inference Toolkit and Using the SageMaker Training and Inference Toolkits.

Building such an image is not easy, so AWS provides pre-built images called AWS Deep Learning Containers; the available images are listed on GitHub.

If your framework and version are listed there, you can use that image. Otherwise you need to build one yourself. See Building a docker container for training/deploying our classifier.
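As a sketch (assuming SageMaker Python SDK v2, where the image_uris module exists; the region and version here are illustrative), you can look up the pre-built image URI for a framework version like this:

from sagemaker import image_uris

# Returns the ECR URI of the pre-built inference container for the given
# framework/version in that region, or raises if none is published.
uri = image_uris.retrieve(
    framework="sklearn",
    region="us-east-1",
    version="0.20.0",
    image_scope="inference",
    instance_type="ml.m4.xlarge",
)
print(uri)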

SageMaker Python SDK for Frameworks

Creating a SageMaker Model yourself via the API is hard, so the AWS SageMaker Python SDK provides utilities to create SageMaker models for several frameworks. See Frameworks for the available frameworks. If yours is not there, you may still be able to use sagemaker.model.FrameworkModel and Model to load your trained model. For your case, see Using Scikit-learn with the SageMaker Python SDK.
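For the scikit-learn case in the question, a sketch using SKLearnModel. Note that SKLearn in the question's code is an Estimator class, so its deploy() insists on a completed training job (which is exactly the ValueError above), while SKLearnModel wraps an already-trained artifact:

import sagemaker
from sagemaker.sklearn import SKLearnModel

role = sagemaker.get_execution_role()

# SKLearnModel wraps a pre-trained artifact in S3; no training job is needed.
model = SKLearnModel(
    model_data="s3://sagemaker-manual-bucket/sm_model_artifacts/model.tar.gz",
    role=role,
    entry_point="inference.py",
    source_dir="code",
    framework_version="0.20.0",
)
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")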

model.tar.gz

For instance, suppose you used PyTorch and saved the model as model.pth. To load the model and the inference code that gets predictions from it, you need to create a model.tar.gz file. The structure inside model.tar.gz is explained in Model Directory Structure. If you use Windows, beware of CRLF vs LF line endings, because SageMaker runs in a *NIX environment. See Create the directory structure for your model files.

|- model.pth        # model file is inside / directory.
|- code/            # Code artefacts must be inside /code
  |- inference.py   # Your inference code for the framework
  |- requirements.txt  # only for versions 1.3.1 and higher. Name must be "requirements.txt"
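A minimal sketch of packaging this layout from Python (assuming model.pth and a code/ directory exist locally):

import tarfile

# Archive with model.pth at the root and the inference code under code/,
# matching the layout above.
with tarfile.open("model.tar.gz", "w:gz") as tar:
    tar.add("model.pth")
    tar.add("code")  # contains inference.py (and requirements.txt if needed)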

Save the tar.gz file in S3, and make sure the IAM role can access the S3 bucket and objects.
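For example, uploading with the SDK session helper (a sketch; the bucket and prefix are illustrative, and upload_data returns the s3:// URI of the uploaded object):

import sagemaker

session = sagemaker.Session()
# Uploads the local model.tar.gz and returns its s3:// URI.
model_file = session.upload_data(
    path="model.tar.gz",
    bucket=session.default_bucket(),
    key_prefix="sm_model_artifacts",
)
print(model_file)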

Loading the model and getting inferences

See Create a PyTorchModel object. When instantiating the PyTorchModel class, SageMaker automatically selects the AWS Deep Learning Container image for PyTorch matching the version specified in framework_version. If no image exists for that version, it fails. This is not documented by AWS, but you need to be aware of it. SageMaker then internally calls the CreateModel API with the S3 model file location and the AWS Deep Learning Container image URL.

import sagemaker
from sagemaker import get_execution_role
from sagemaker.pytorch import PyTorchModel
from sagemaker.predictor import RealTimePredictor, json_serializer, json_deserializer

role = sagemaker.get_execution_role()  # IAM role to run SageMaker, access S3 and ECR
model_file = "s3://YOUR_BUCKET/YOUR_FOLDER/model.tar.gz"   # Must be ".tar.gz" suffix


class AnalysisClass(RealTimePredictor):
    def __init__(self, endpoint_name, sagemaker_session):
        super().__init__(
            endpoint_name,
            sagemaker_session=sagemaker_session,
            serializer=json_serializer,
            deserializer=json_deserializer,   # To be able to use JSON serialization
            content_type='application/json'   # To be able to send JSON as HTTP body
        )

model = PyTorchModel(
    model_data=model_file,
    name='YOUR_MODEL_NAME_WHATEVER',
    role=role,
    entry_point='inference.py',
    source_dir='code',              # Location of the inference code
    framework_version='1.5.0',      # Available AWS Deep Learning PyTorch container version must be specified
    predictor_cls=AnalysisClass     # To specify the HTTP request body format (application/json)
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type='ml.m5.xlarge'
)

test_data = {"body": "YOUR PREDICTION REQUEST"}
prediction = predictor.predict(test_data)

By default, SageMaker uses NumPy as the serialization format. To be able to use JSON, you need to specify the serializer and content_type. Instead of using the RealTimePredictor class, you can set them directly on the predictor:

predictor.serializer=json_serializer
predictor.predict(test_data)

Or:

import json

predictor.serializer = None  # As the serializer is None, the predictor won't serialize the data
serialized_test_data = json.dumps(test_data)
predictor.predict(serialized_test_data)

Inference code sample

See Process Model Input, Get Predictions from a PyTorch Model, and Process Model Output. In this example, the prediction request is sent as JSON in the HTTP request body.

import os
import sys
import datetime
import json
import torch
import numpy as np

CONTENT_TYPE_JSON = 'application/json'

def model_fn(model_dir):
    # SageMaker automatically downloads the model.tar.gz from S3 and
    # mounts its extracted contents inside the docker container.
    # 'model_dir' points to the root of the extracted tar.gz file.

    model_path = f'{model_dir}/'

    # Load the model here.
    # You can load it from wherever you like - the Internet, S3, etc. <--- Answer to your question
    # There is NO need to use the model in the tar.gz; you can even place a dummy model file there.
    ...

    return model


def predict_fn(input_data, model):
    # Do your inference
    ...

def input_fn(serialized_input_data, content_type=CONTENT_TYPE_JSON):
    input_data = json.loads(serialized_input_data)
    return input_data


def output_fn(prediction_output, accept=CONTENT_TYPE_JSON):
    if accept == CONTENT_TYPE_JSON:
        return json.dumps(prediction_output), accept
    raise Exception('Unsupported content type') 

Note

The SageMaker team keeps changing the implementation, and the documentation is frequently obsolete. When you are sure you followed the documents and it still does not work, obsolete documentation is quite likely the cause. In that case, clarify with AWS support or open an issue on GitHub.
