
What's the difference between regular and ml AWS EC2 instances?

I'm experimenting with AWS Sagemaker on a Free Tier account. According to the Sagemaker pricing page, I can use 50 hours of m4.xlarge or m5.xlarge instances for training under the free tier. (I am safely within the two-month limit.) But when I attempt to train an algorithm with the XGBoost container on m5.xlarge, I get the error shown below the code.

Are the ml-type and non-ml-type instances the same hardware, with just a fancy prefix for the ones used with Sagemaker, or are they entirely different? The EC2 pricing page doesn't even list the ml instances.

import sagemaker

sess = sagemaker.Session()

xgb = sagemaker.estimator.Estimator(container,
                                    role,
                                    instance_count=1,
                                    instance_type='m5.xlarge',
                                    output_path=output_location,
                                    sagemaker_session=sess)

ClientError: An error occurred (ValidationException) when calling the CreateTrainingJob operation: 1 validation error detected: Value 'm5.xlarge' at 'resourceConfig.instanceType' failed to satisfy constraint: Member must satisfy enum value set: [ml.p2.xlarge, ml.m5.4xlarge, ml.m4.16xlarge, ml.p4d.24xlarge, ml.c5n.xlarge, ml.p3.16xlarge, ml.m5.large, ml.p2.16xlarge, ml.c4.2xlarge, ml.c5.2xlarge, ml.c4.4xlarge, ml.c5.4xlarge, ml.c5n.18xlarge, ml.g4dn.xlarge, ml.g4dn.12xlarge, ml.c4.8xlarge, ml.g4dn.2xlarge, ml.c5.9xlarge, ml.g4dn.4xlarge, ml.c5.xlarge, ml.g4dn.16xlarge, ml.c4.xlarge, ml.g4dn.8xlarge, ml.c5n.2xlarge, ml.c5n.4xlarge, ml.c5.18xlarge, ml.p3dn.24xlarge, ml.p3.2xlarge, ml.m5.xlarge, ml.m4.10xlarge, ml.c5n.9xlarge, ml.m5.12xlarge, ml.m4.xlarge, ml.m5.24xlarge, ml.m4.2xlarge, ml.p2.8xlarge, ml.m5.2xlarge, ml.p3.8xlarge, ml.m4.4xlarge]

The instances with the ml prefix are instance classes specifically for use in Sagemaker.

In addition to being usable only within the Sagemaker service, these instances run an AMI with the necessary libraries and packages, such as Jupyter, preinstalled.
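In practice this means the `CreateTrainingJob` API only accepts instance types from its own `ml.*` enum (the one listed in the validation error), so the fix for the code in the question is to prepend `ml.` to the EC2-style name. A minimal sketch: the `to_sagemaker_instance` helper below is hypothetical, written for illustration, and is not part of the Sagemaker SDK.

```python
def to_sagemaker_instance(ec2_instance_type: str) -> str:
    """Prepend the 'ml.' prefix that Sagemaker training jobs expect,
    leaving the name unchanged if it is already prefixed."""
    if ec2_instance_type.startswith("ml."):
        return ec2_instance_type
    return "ml." + ec2_instance_type


# The Estimator from the question would then be created with, e.g.:
#   instance_type=to_sagemaker_instance("m5.xlarge")   # 'ml.m5.xlarge'
```

Note that only `ml.m5.xlarge` (not `m5.xlarge`) counts against the Sagemaker free-tier training hours, since the quota is defined in terms of the Sagemaker instance classes.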

The AWS EC2 instances with the ml prefix are specific EC2 instance classes for use in AWS Sagemaker.

ml instances are optimized for machine learning. Sagemaker is also convenient for sharing machine-learning setups, because other people can reproduce your environment on the same ml instances. Setting up plain EC2 for machine learning requires more code and manual work, and sharing the result is harder.
