
Is an EKS node group really necessary?

I have a few questions on EKS node groups.

  1. I don't understand the concept of a node group and why it is required. Can't we just create an EC2 instance and run kubeadm join to join it to the EKS cluster? What advantage does a node group hold?

  2. Do node groups (managed or self-managed) have to exist in the same VPC as the EKS cluster? Is it not possible to create a node group in another VPC? If so, how?

  1. Managed node groups are a way to let AWS manage part of the lifecycle of the Kubernetes nodes. You are certainly still allowed to configure self-managed nodes if you need or want to. To be fair, you could also spin up a few EC2 instances and configure your own K8s control plane. It boils down to how much you want managed vs. how much you want to do yourself. The other extreme on this spectrum would be Fargate, which is a fully managed experience (there are no nodes to scale or configure, no AMIs, etc.). See the first sketch after this list for what the managed option looks like in practice.
  2. The EKS cluster (control plane) lives in a separate AWS-managed account/VPC. When you deploy a cluster, EKS asks you which subnets (and which VPC) you want the cluster to manifest itself in (through ENIs that get plugged into your VPC/subnets). That VPC is where your self-managed workers, your managed node groups, and your Fargate profiles need to be plugged in. You can't use another VPC to add capacity to the cluster; the second sketch below shows how to look up which VPC that is.
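
To make point 1 concrete, here is a minimal sketch of creating a managed node group through the EKS API, using boto3 (the AWS SDK for Python). The region, cluster name, node group name, subnet IDs, and IAM role ARN are all placeholders; the point is that EKS launches, bootstraps, and registers the EC2 instances itself, so there is no kubeadm join step for you to run.

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")  # placeholder region

# Create a managed node group: EKS launches EC2 instances from an
# EKS-optimized AMI, bootstraps them, and joins them to the cluster.
eks.create_nodegroup(
    clusterName="my-cluster",                      # placeholder cluster name
    nodegroupName="my-nodegroup",                  # placeholder node group name
    subnets=["subnet-0abc123", "subnet-0def456"],  # must belong to the cluster's VPC
    scalingConfig={"minSize": 1, "maxSize": 3, "desiredSize": 2},
    instanceTypes=["t3.medium"],
    amiType="AL2_x86_64",                          # EKS-optimized Amazon Linux 2
    nodeRole="arn:aws:iam::111122223333:role/eksNodeRole",  # placeholder role ARN
)
```

Node upgrades and AMI rollouts for such a group are then driven through the same API (for example update_nodegroup_version), which is exactly the lifecycle work a hand-rolled kubeadm join approach would leave to you.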
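And for point 2, a second sketch (same boto3 client, placeholder cluster name) that looks up the VPC and subnets the cluster's ENIs were placed in at creation time; any node group or Fargate profile has to attach to subnets from this list.

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")  # placeholder region

# describe_cluster returns the VPC configuration chosen at cluster creation;
# all worker capacity (self-managed nodes, managed node groups, Fargate)
# must plug into these subnets.
cfg = eks.describe_cluster(name="my-cluster")["cluster"]["resourcesVpcConfig"]
print("Cluster VPC:    ", cfg["vpcId"])
print("Cluster subnets:", cfg["subnetIds"])
```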
