
On-Premises Hosting of a Trained DeepAR Model Built on AWS SageMaker

I have recently started working with AWS SageMaker using the examples provided by AWS. I used this example (DeepAR Model) to forecast a time series. After training, a model artifacts file was created in my S3 bucket.

My question: Is there a way to host that trained model in my own hosting environment (on client premises)?

Except for SageMaker XGBoost, the SageMaker built-in algorithms are not designed to be used outside of Amazon. That does not mean it's impossible; for example, you can find snippets here and there that peek inside the model artifacts (e.g. for Factorization Machines and Neural Topic Model), but these approaches can be hacky and are usually not part of the official service features. Regarding DeepAR specifically, the model was open-sourced a couple of weeks ago as part of the gluon-ts Python package (blog post, code), so if you develop a model specifically for your own hosting environment, I'd recommend using that gluon-ts code with the MXNet container, so that you'll be able to open and read the artifact outside of SageMaker.
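For illustration, here is a minimal sketch of training a DeepAR model locally with gluon-ts and serializing it to disk so it can be reloaded in your own hosting environment. The import paths, hyperparameters, and the toy dataset are assumptions and may differ between gluon-ts versions; this is not the SageMaker DeepAR artifact format, but the open-source equivalent the answer refers to.

```python
from pathlib import Path

import pandas as pd

from gluonts.dataset.common import ListDataset
from gluonts.model.deepar import DeepAREstimator
from gluonts.mx.trainer import Trainer  # older releases: from gluonts.trainer import Trainer

# Toy hourly series for demonstration; replace with your own data.
target = [float(i % 24) for i in range(400)]
train_ds = ListDataset(
    [{"start": pd.Timestamp("2019-01-01 00:00"), "target": target}],
    freq="H",
)

# Train DeepAR entirely on the local machine (no SageMaker involved).
estimator = DeepAREstimator(
    freq="H",
    prediction_length=24,
    trainer=Trainer(epochs=5),
)
predictor = estimator.train(training_data=train_ds)

# Persist the predictor to local disk so it can be shipped to and
# reloaded in your own hosting environment.
model_dir = Path("./deepar-model")
model_dir.mkdir(exist_ok=True)
predictor.serialize(model_dir)

# Later, on the serving host:
# from gluonts.model.predictor import Predictor
# predictor = Predictor.deserialize(model_dir)
forecasts = list(predictor.predict(train_ds))
print(forecasts[0].mean)
```

A model serialized this way can be loaded behind any web framework or batch job you control, which is effectively the "own hosting environment" the question asks about.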
