
Infrastructure used in Amazon EKS

I was looking into a demo of an application built on Amazon's Kubernetes service, EKS. However, I am struggling to understand what infrastructure is used underneath, as I don't have direct access to AWS.

My understanding:

  1. You define a cluster; there is a cost regardless of whether you use it, so I suspect a master node is always up.
  2. While a job is running, you pay VM costs, so it is clear that it runs on VMs.

Now my question:

What happens when you spin up and down?

First of all, do the VMs spin down to exactly what you need, or is some capacity always kept up to allow you to scale up quickly?

Secondly, if a VM spins down, does that mean the instance is terminated or just stopped?


I noticed that scaling up happens in a few seconds, which makes me doubt that VMs are actually created each time you scale up.

Your understanding is roughly correct. With EKS there is a 'control plane' managed by Amazon (effectively the master nodes of the Kubernetes cluster). This is invisible to you as the AWS account holder, and you can't get to the underlying machines yourself. Amazon charges a flat rate for this, and you can't scale it down to lower the cost.

You pay $0.20 per hour for each Amazon EKS cluster that you create.
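For what it's worth, the only part of that control plane you can see from your account is its API endpoint. A minimal sketch with boto3 (the region and cluster name are made-up placeholders, not anything from your demo):

    import boto3

    eks = boto3.client("eks", region_name="eu-west-1")  # region is an assumption

    # "demo-cluster" is a hypothetical cluster name for illustration
    cluster = eks.describe_cluster(name="demo-cluster")["cluster"]

    # You get the Kubernetes API endpoint and the cluster status, but no EC2
    # instances: the machines behind the control plane never appear in your account.
    print(cluster["endpoint"])
    print(cluster["status"])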

Your second point about jobs running is not very clear. You don't necessarily run jobs in Kubernetes: you run containers in pods (and you can also run 'Jobs', which are pods with a limited lifespan that end when a process completes).
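As a rough illustration of that difference, here is a sketch of a one-off Job using the official Kubernetes Python client (the image, names and command are placeholders I've chosen, not anything from your demo):

    from kubernetes import client, config

    # Assumes a kubeconfig already pointing at the cluster,
    # e.g. one written by `aws eks update-kubeconfig --name demo-cluster`
    config.load_kube_config()

    # A Job is just a pod with a limited lifespan: it runs to completion and stops.
    job = client.V1Job(
        metadata=client.V1ObjectMeta(name="one-off-task"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    containers=[
                        client.V1Container(
                            name="task",
                            image="busybox",
                            command=["sh", "-c", "echo done"],
                        )
                    ],
                )
            )
        ),
    )
    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)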

To actually run your workloads, you need to create 'worker groups' (node groups) for your EKS cluster. How you create these is up to you.

Generally, you create an EC2 Auto Scaling group for each worker group, and you define yourself how that Auto Scaling group scales the worker nodes in your cluster out and in. These are classic EC2 VMs, as you guessed, and you can access them with SSH or SSM, for example; they are managed by you.
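If the demo used EKS managed node groups, you can see that relationship from the EKS API itself. A sketch with boto3 (region, cluster and node group names are hypothetical):

    import boto3

    eks = boto3.client("eks", region_name="eu-west-1")  # region is an assumption

    # "demo-cluster" / "demo-workers" are placeholder names for illustration
    ng = eks.describe_nodegroup(
        clusterName="demo-cluster",
        nodegroupName="demo-workers",
    )["nodegroup"]

    # The min/max/desired worker node counts for this worker group
    print(ng["scalingConfig"])

    # The EC2 Auto Scaling group(s) that actually own the worker VMs
    print(ng["resources"]["autoScalingGroups"])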

So to scale the worker groups that run your container workloads, you can either scale them by hand, rely on Auto Scaling group metrics to scale them in and out, or use a solution like cluster-autoscaler to scale them in and out more intelligently based on what the containers in your cluster are doing.
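Hand scaling, for example, is just changing the desired size. A sketch for a managed node group (same hypothetical names and region as above):

    import boto3

    eks = boto3.client("eks", region_name="eu-west-1")  # region is an assumption

    # Scale the hypothetical "demo-workers" group out to 3 nodes;
    # EKS adjusts the underlying Auto Scaling group for you.
    eks.update_nodegroup_config(
        clusterName="demo-cluster",
        nodegroupName="demo-workers",
        scalingConfig={"minSize": 1, "maxSize": 5, "desiredSize": 3},
    )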

Generally, when an Auto Scaling group / worker group scales in, it terminates the EC2 instance. When a new one comes up, your launch configuration (or launch template) for the worker group should contain everything the new instance needs to join the EKS cluster automatically and begin scheduling pods.

So yes, VMs are indeed created/started/provisioned when the worker group scales out. If they're Linux-based EKS worker nodes, these normally start fairly quickly; Windows ones are generally a bit slower.

To answer your other question: VMs spin down to exactly what you need only if you've configured your scaling mechanisms carefully, to your own requirements. cluster-autoscaler helps a lot with this.
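cluster-autoscaler typically finds the Auto Scaling groups it is allowed to resize by looking for two well-known tags on them. A sketch of tagging a worker group's Auto Scaling group with boto3 (the group name, cluster name and region are placeholders):

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="eu-west-1")  # region is an assumption

    # These tag keys are what cluster-autoscaler's auto-discovery mode looks for;
    # "demo-workers-asg" and "demo-cluster" are hypothetical names.
    autoscaling.create_or_update_tags(
        Tags=[
            {
                "ResourceId": "demo-workers-asg",
                "ResourceType": "auto-scaling-group",
                "Key": "k8s.io/cluster-autoscaler/enabled",
                "Value": "true",
                "PropagateAtLaunch": True,
            },
            {
                "ResourceId": "demo-workers-asg",
                "ResourceType": "auto-scaling-group",
                "Key": "k8s.io/cluster-autoscaler/demo-cluster",
                "Value": "owned",
                "PropagateAtLaunch": True,
            },
        ]
    )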

Hope that helps clear things up for you.
