
How to run a pre-trained model in AWS SageMaker?

I have a pre-trained model.pkl file, along with all the other files related to the ML model, and I want to deploy it on AWS SageMaker. But how do I deploy it without training? The fit() method in SageMaker runs the training job and pushes model.tar.gz to an S3 location, and the deploy() method then uses that same S3 location to deploy the model; we don't create that location in S3 manually, since SageMaker creates it and names it using a timestamp. How can I put my own model.tar.gz file in an S3 location and call deploy() against that same location?

All you need is:

  1. to have your model in an arbitrary S3 location, packaged as a model.tar.gz archive
  2. to have an inference script in a SageMaker-compatible docker image that is able to read your model.pkl, serve it, and handle inference requests
  3. to create an endpoint associating your artifact with your inference code

When you ask for an endpoint deployment, SageMaker will take care of downloading your model.tar.gz and uncompressing it to the appropriate location in the docker image of the server, which is /opt/ml/model

Depending on the framework you use, you may use either a pre-existing docker image (available for Scikit-learn, TensorFlow, PyTorch, MXNet) or you may need to create your own.

  • Regarding custom image creation, see here the specification and here two examples of custom containers, for R and sklearn (the sklearn one is less relevant now that there is a pre-built docker image along with a SageMaker sklearn SDK)
  • Regarding leveraging existing containers for Sklearn, PyTorch, MXNet, and TF, check this example: Random Forest in SageMaker Sklearn container. In this example, nothing prevents you from deploying a model that was trained elsewhere. Note that with a train/deploy environment mismatch you may run into errors due to software version differences, though.
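With the pre-built sklearn container, the SDK's framework wrapper selects the container image for you. A sketch, where the S3 URI, role ARN, and framework version are placeholders to adapt:

```python
def deploy_sklearn_model(model_data, role):
    """Deploy a pre-trained sklearn artifact using the pre-built container.
    model_data: s3://.../model.tar.gz ; role: an IAM role ARN."""
    from sagemaker.sklearn import SKLearnModel  # SageMaker Python SDK

    model = SKLearnModel(
        model_data=model_data,
        role=role,
        entry_point="inference.py",   # script implementing model_fn
        framework_version="1.2-1",    # placeholder; match your sklearn version
    )
    return model.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```

Keeping framework_version aligned with the sklearn version that produced model.pkl is what avoids the train/deploy version-mismatch errors mentioned above.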

Regarding your following experience:

when the deploy method is used it uses the same s3 location to deploy the model; we don't manually create that location in s3, as it is created by SageMaker and named using a timestamp

I agree that sometimes the demos that use the SageMaker Python SDK (one of the many available SDKs for SageMaker) may be misleading, in the sense that they often leverage the fact that an Estimator that has just been trained can be deployed (Estimator.deploy(..)) in the same session, without having to instantiate the intermediary model concept that maps inference code to the model artifact. This design is presumably chosen for the sake of code brevity, but in real life, training and deployment of a given model may well be done from different scripts running in different systems. It's perfectly possible to deploy a model without having trained it previously in the same session: you need to instantiate a sagemaker.model.Model object and then deploy it.
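A sketch of that last point, assuming a serving container image URI and an artifact already sitting in S3 (all names below are placeholders):

```python
def deploy_pretrained(image_uri, model_data, role):
    """Create a sagemaker.model.Model from an existing artifact and deploy
    it to a real-time endpoint, with no training step involved."""
    from sagemaker.model import Model  # SageMaker Python SDK

    model = Model(
        image_uri=image_uri,    # serving container, e.g. a pre-built framework image
        model_data=model_data,  # e.g. s3://your-bucket/models/model.tar.gz
        role=role,              # IAM role ARN that SageMaker assumes
    )
    return model.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```

Because Model is constructed directly from the artifact location and the inference image, this script can run on a completely different system from the one that trained model.pkl.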


