
GKE Autopilot with shared VPC: IP space exhausted

I have set up a new subnet in my shared VPC for GKE Autopilot as follows:

node IP range: 10.11.1.0/24
first secondary IP range (Pods): 10.11.2.0/24
second secondary IP range (Services): 10.11.3.0/24
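
For reference, a subnet along those lines could be created with something like the following; the subnet, network, and region names here are placeholders, not values from the question:

    gcloud compute networks subnets create gke-subnet \
        --network=shared-vpc \
        --region=us-central1 \
        --range=10.11.1.0/24 \
        --secondary-range=pods=10.11.2.0/24,services=10.11.3.0/24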

I tried to test it by deploying a simple nginx image with 30 replicas.
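
For example, such a test can be reproduced with plain kubectl (the Deployment name nginx is just an illustration):

    # Create a Deployment with 30 nginx replicas:
    kubectl create deployment nginx --image=nginx --replicas=30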

Based on my understanding:

I have 256 possible node IPs
I have 256 possible Pod IPs
I have 256 possible Service IPs

After deploying, my cluster is somehow stuck with only 2 Pods deployed and running; the rest are in Pending state with the error code IP_SPACE_EXHAUSTED.
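
To see where the error surfaces, you can inspect the Pending Pods' events; assuming the Deployment was created as above, kubectl labels the Pods app=nginx by default:

    # List the Pods and pick one that is stuck in Pending:
    kubectl get pods -l app=nginx

    # The Events section at the bottom shows why the Pod cannot be
    # scheduled, e.g. a failed node scale-up:
    kubectl describe pod <pending-pod-name>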

My question is: how come? I still have plenty of IP addresses, and this is a freshly deployed Kubernetes cluster.

Pod CIDR ranges in Autopilot clusters

The default settings for Autopilot cluster CIDR sizes are as follows:

  • Subnetwork range: /23
  • Secondary IP address range for Pods: /17
  • Secondary IP address range for Services: /22

Autopilot has a maximum of 32 Pods per node (see the GKE documentation on maximum Pods per node).

The maximum number of nodes in an Autopilot cluster is pre-configured and immutable (see the GKE documentation on Autopilot cluster limits).

Autopilot sets "max Pods per node" to 32. This results in a /26 (64 IP addresses) being assigned to each Autopilot node from the Pod secondary IP range. Since your Pod range is a /24, your Autopilot cluster can support a maximum of 4 nodes.
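
Spelled out: a /24 contains 2^(32-24) = 256 addresses; with 32 Pods per node, GKE reserves a /26 (64 addresses) per node; 256 / 64 = 4 nodes; and 4 nodes × 32 Pods per node = 128 Pods as an absolute upper bound, before system Pods are subtracted.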

By default, Autopilot clusters start with 2 nodes (one of which runs system workloads). It looks like your Pods did not fit on either of those nodes, so Autopilot provisioned new nodes as required. Generally, Autopilot tries to find the best-fit node sizes for your Deployments, and in this case it looks like you ended up with one Pod per node, so the 4-node ceiling was reached well before all 30 replicas could be scheduled.

I'd recommend a /17 or a /16 for your Pod range to maximize the number of nodes.
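
As a sketch of the fix, reusing the placeholder names from earlier (all hypothetical), you could add a larger secondary range to the existing subnet and point a new cluster at it; note that a secondary range already in use by a cluster cannot be resized, and the new CIDR must not overlap anything else in the shared VPC:

    # Add a /17 Pod range to the existing subnet (hypothetical name/CIDR):
    gcloud compute networks subnets update gke-subnet \
        --region=us-central1 \
        --add-secondary-ranges=pods-large=10.64.0.0/17

    # Create the Autopilot cluster against the new range; in a Shared VPC
    # service project, --network/--subnetwork may need full resource paths:
    gcloud container clusters create-auto autopilot-cluster \
        --region=us-central1 \
        --network=shared-vpc \
        --subnetwork=gke-subnet \
        --cluster-secondary-range-name=pods-large \
        --services-secondary-range-name=services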
