
I have limited IP space in my AWS VPC. How do I set up Kubernetes in AWS so that the worker nodes and control plane are in different subnets?

I have a limited number of IPs in my public-facing VPC, which basically means I cannot run the K8S worker nodes in that VPC, since I would not have sufficient IPs to support all the pods. My requirement is to run the control plane in my public-facing VPC and the worker nodes in a different VPC with a private IP range (192.168.XX).

We use Traefik for ingress and have deployed it as a DaemonSet. These pods are exposed through a Kubernetes Service of type LoadBalancer backed by an NLB, and we created a VPC endpoint on top of this NLB, which lets us reach the Traefik endpoint from our public-facing VPC.
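For reference, such a Service can be sketched roughly as below. The names and labels are illustrative, not taken from the question; the annotation is the in-tree AWS cloud provider's way of requesting an NLB instead of a classic ELB:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik                     # illustrative name
  annotations:
    # Ask the AWS cloud provider to provision an NLB rather than a classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: traefik                    # must match the DaemonSet's pod labels
  ports:
    - name: web
      port: 80
      targetPort: 80
```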

However, based on the docs, NLB support still appears to be in the alpha stage. I am curious what my other options are, given the above constraints.

Usually, in a Kubernetes cluster, Pods run in a separate overlay network whose address range must not overlap with the existing IP subnets in your VPC.
This functionality is provided by Kubernetes cluster networking solutions such as Calico, Flannel, Weave, etc.
So you only need enough IP address space to support the cluster nodes themselves, not every pod.
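As a concrete sketch, with kubeadm you declare the pod overlay range up front and then install a CNI plugin configured for the same range. The 10.244.0.0/16 value below is Flannel's conventional default and is only an illustration; pick any range that does not overlap your VPC subnets:

```yaml
# kubeadm ClusterConfiguration fragment (passed to `kubeadm init --config ...`)
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  # Overlay range for pods; must not overlap any existing VPC subnet
  podSubnet: "10.244.0.0/16"
```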

The main benefit of using an NLB is that it can preserve the client IP address for the pods, so if you have no such requirement, a regular ELB works well for most cases.
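If you do need the client IP, note that you typically also set `externalTrafficPolicy: Local` on the Service so kube-proxy does not SNAT the source address; with a DaemonSet this fits naturally, since every node runs a local Traefik pod. A minimal sketch (names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik                     # illustrative name
spec:
  type: LoadBalancer
  # Keep traffic on the receiving node so the client source IP is preserved;
  # with a DaemonSet every node has a local pod to serve it
  externalTrafficPolicy: Local
  selector:
    app: traefik
  ports:
    - port: 80
      targetPort: 80
```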

You can add a secondary CIDR to your VPC and use one of the two options described here to have pods use the secondary VPC CIDR: https://aws.amazon.com/blogs/containers/optimize-ip-addresses-usage-by-pods-in-your-amazon-eks-cluster/
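The custom-networking option from that post boils down to attaching the secondary CIDR to the VPC (e.g. `aws ec2 associate-vpc-cidr-block`), enabling `AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true` on the `aws-node` DaemonSet, and creating one `ENIConfig` per availability zone so the Amazon VPC CNI places pod ENIs in subnets carved from that CIDR. A sketch of such an `ENIConfig`; the subnet and security-group IDs are placeholders:

```yaml
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-east-1a                  # one ENIConfig per availability zone
spec:
  subnet: subnet-0123456789abcdef0  # placeholder: subnet from the secondary CIDR
  securityGroups:
    - sg-0123456789abcdef0          # placeholder: security group for pod ENIs
```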
