
GKE mismatch limit of Kubernetes pods per node from official documentation

I am sizing a small Kubernetes cluster in Google Cloud Platform; my reference is the following documentation: https://cloud.google.com/kubernetes-engine/docs/how-to/flexible-pod-cidr#overview

So I have:

  • 3 nodes
  • /24 for pods
  • /25 for services
  • 16 pods per node, set at cluster creation
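On paper the numbers add up. Here is a quick back-of-the-envelope check, assuming GKE carves one /27 per node out of the /24 pod range, as the linked documentation describes:

# /24 secondary range for pods -> 256 pod IPs in total
# one /27 per node (32 IPs)    -> covers the 16-pod-per-node maximum
# 3 nodes x /27                -> 96 IPs consumed, well inside the /24
echo $(( 3 * 16 ))  # expected capacity: 48 pods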

When I deploy the cluster and spin up nginx pod replicas, I can only reach a maximum of 30 pods, while I would expect to reach 48.

According to the Google documentation I should have a /27 per node (which I can see assigned on each node) and a range of 9-16 pods per node. Now, while an average of 10 pods per node is fair considering the 9-16 range, I don't understand why it doesn't scale up above that number.
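For reference, each node's assigned pod CIDR and its allocatable pod count can be checked with something like the following (a sketch using standard Kubernetes node fields; on this cluster the allocatable pod count should report 16):

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\t"}{.status.allocatable.pods}{"\n"}{end}'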

Here is the code for your review; I wasn't able to see whether there is any other limitation:

gcloud compute networks subnets create $SERVICE_PROJECT1_SUB_K8S_NODES \
--network $SHAREDVPC --region $REGION \
--range 10.222.5.32/28 \
--secondary-range $SERVICE_PROJECT1_SUB_K8S_PODS=10.222.6.0/24,$SERVICE_PROJECT1_SUB_K8S_SERVICES=10.222.5.128/25 \
--enable-private-ip-google-access
gcloud beta container clusters create service1-k8s-cluster \
--zone $REGION \
--network projects/$HOST_PROJECT_ID/global/networks/$SHAREDVPC \
--subnetwork projects/$HOST_PROJECT_ID/regions/$REGION/subnetworks/$SERVICE_PROJECT1_SUB_K8S_NODES \
--cluster-secondary-range-name $SERVICE_PROJECT1_SUB_K8S_PODS \
--services-secondary-range-name $SERVICE_PROJECT1_SUB_K8S_SERVICES \
--enable-master-authorized-networks \
--master-authorized-networks 10.222.1.0/24 \
--enable-ip-alias \
--enable-private-nodes \
--enable-private-endpoint \
--master-ipv4-cidr $SERVICE_PROJECT1_SUB_K8S_MASTER \
--no-enable-basic-auth \
--no-issue-client-certificate \
--enable-master-global-access \
--num-nodes 1 \
--default-max-pods-per-node 16 \
--max-pods-per-node 16 \
--machine-type n1-standard-2
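To double-check what the cluster actually recorded for the per-node pod limit, the cluster description can be queried after creation (a sketch; defaultMaxPodsConstraint is the GKE field that holds this setting):

gcloud container clusters describe service1-k8s-cluster --zone $REGION \
--format="value(defaultMaxPodsConstraint.maxPodsPerNode)"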

Error I see in a pod:

Events:
  Type     Reason             Age                    From                Message
  ----     ------             ----                   ----                -------
  Normal   NotTriggerScaleUp  4m53s (x151 over 29m)  cluster-autoscaler  pod didn't trigger scale-up (it wouldn't fit if a new node is added):
  Warning  FailedScheduling   8s (x22 over 29m)      default-scheduler   0/3 nodes are available: 3 Insufficient pods.
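To see how many pods are already running on a given node, something like this can be used (a sketch; NODE_NAME is a placeholder for one of the actual node names):

kubectl get pods --all-namespaces -o wide \
--field-selector spec.nodeName=NODE_NAME | tail -n +2 | wc -l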

You will never reach that 48-pod threshold. Some IPs are used by DaemonSet pods, which will keep you from the upper bound you set for yourself. For example, in my cluster I have the following:

  Namespace                  Name                                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                                                              ------------  ----------  ---------------  -------------  ---
  kube-system                fluentd-gcp-v3.1.1-grkv8                                          100m (1%)     1 (12%)     200Mi (0%)       500Mi (1%)     10d
  kube-system                kube-proxy-gke-eng-e2e-main-gke-e2e-n1-highmem-8-501281f5-9ck0    100m (1%)     0 (0%)      0 (0%)           0 (0%)         3d19h
  kube-system                network-metering-agent-ck74l                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10d
  kube-system                prometheus-to-sd-qqsn6                                            1m (0%)       3m (0%)     20Mi (0%)        37Mi (0%)      10d
  monitor                    prometheus-prometheus-node-exporter-8229c                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11d
  neuvector                  neuvector-enforcer-pod-p79j5                                      100m (1%)     2 (25%)     128Mi (0%)       1Gi (2%)       11d

This happens on every node: the DaemonSets deploy these pods on each node, effectively reducing the number of IPs available to my application pods by 6 per node.
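That lines up exactly with the ceiling observed above; a quick check, assuming the 6 system pods per node shown in my listing:

# 16 (max pods per node) - 6 (DaemonSet/system pods) = 10 application pods per node
echo $(( (16 - 6) * 3 ))  # 30 application pods across 3 nodes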
