GKE Insufficient CPU for small Node.js app pods
So on GKE I have a Node.js app where each pod uses about: CPU(cores): 5m, MEMORY: 100Mi.
However, I am only able to deploy 1 pod of it per node. I am using a GKE n1-standard-1 cluster, which has 1 vCPU and 3.75 GB per node.
So in order to get 2 pods of the app up, totalling CPU(cores): 10m, MEMORY: 200Mi, it requires an entire additional node (2 nodes = 2 vCPU, 7.5 GB) to make it work. If I try to deploy those 2 pods on the same single node, I get an insufficient CPU error.
I have a feeling I should actually be able to run a handful of pod replicas (3 replicas or more) on 1 node of f1-micro (1 vCPU, 0.6 GB) or f1-small (1 vCPU, 1.7 GB), and that I am way overprovisioned here and wasting my money.
But I am not sure why I seem so restricted by insufficient CPU. Is there some config I need to change? Any guidance would be appreciated. Here is the kubectl describe nodes output for one of my nodes:
Allocatable:
cpu: 940m
ephemeral-storage: 47093746742
hugepages-2Mi: 0
memory: 2702216Ki
pods: 110
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
default mission-worker-5cf6654687-fwmk4 100m (10%) 0 (0%) 0 (0%) 0 (0%)
default mission-worker-5cf6654687-lnwkt 100m (10%) 0 (0%) 0 (0%) 0 (0%)
kube-system fluentd-gcp-v3.1.1-5b6km 100m (10%) 1 (106%) 200Mi (7%) 500Mi (18%)
kube-system kube-dns-76dbb796c5-jgljr 260m (27%) 0 (0%) 110Mi (4%) 170Mi (6%)
kube-system kube-proxy-gke-test-cluster-pool-1-96c6d8b2-m15p 100m (10%) 0 (0%) 0 (0%) 0 (0%)
kube-system metadata-agent-nb4dp 40m (4%) 0 (0%) 50Mi (1%) 0 (0%)
kube-system prometheus-to-sd-gwlkv 1m (0%) 3m (0%) 20Mi (0%) 20Mi (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 701m (74%) 1003m (106%)
memory 380Mi (14%) 690Mi (26%)
Events: <none>
After the deployment, check the node capacities with kubectl describe nodes. For example, in the code example at the bottom of the answer:
Allocatable cpu: 1800m
Already used by pods in the kube-system namespace: 100m + 260m + 100m + 200m + 20m = 680m
Which means 1800m - 680m = 1120m is left for you to use
So, if your pod or pods request more than 1120m of CPU, they will not fit on this node
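The arithmetic above can be sketched in a short script (a hypothetical helper, not part of kubectl; the figures come from the example node at the bottom of the answer):

```python
def millicores(v):
    """Parse a Kubernetes CPU quantity ('1800m' or '2') into millicores."""
    return int(v[:-1]) if v.endswith("m") else int(float(v) * 1000)

def remaining_cpu(allocatable, requests):
    """Allocatable CPU minus the sum of the existing pods' CPU requests."""
    return millicores(allocatable) - sum(millicores(r) for r in requests)

# CPU requests of the kube-system pods in the example node output:
kube_system_requests = ["100m", "260m", "100m", "200m", "20m"]
left = remaining_cpu("1800m", kube_system_requests)
print(left)  # 1120 millicores left for your own pods
```

Any pod whose CPU request exceeds that remainder will stay Pending with an insufficient CPU event.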
So in order to get 2 pods of app up total = CPU(cores): 10m, MEMORY: 200Mi, it requires another entire +1 node = 2 nodes = 2 vCPU, 7.5 GB to make it work. If I try to deploy those 2 pods on the same single node, I get insufficient CPU error.
If you do the exercise described above, you will find your answer. In case there is enough CPU for your pods to use and you are still getting an insufficient CPU error, check whether you are setting the CPU request and limit params correctly. See here.
If you do all of the above and it's still an issue, then I think what could be happening in your case is that you are allocating 5-10m of CPU for your Node app, which is too little CPU to allocate. Try increasing that, maybe to 50m of CPU.
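A minimal sketch of where those values go in a Deployment spec (the names and image are placeholders, and the 50m/100m figures are only the suggestion above; tune them to what your app actually needs):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app            # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
      - name: node-app
        image: node:18-alpine     # placeholder image
        resources:
          requests:
            cpu: 50m              # what the scheduler reserves per pod
            memory: 100Mi
          limits:
            cpu: 100m             # hard cap before CPU throttling
            memory: 200Mi
```

The scheduler fits pods onto nodes by their requests, not their actual usage, so a request that is much larger than real consumption is what makes a node look "full".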
I have a feeling I should actually be able to run a handful of pod replicas (like 3 replicas and more) on 1 node of f1-micro (1 vCPU, 0.6 GB) or f1-small (1 vCPU, 1.7 GB), and that I am way overprovisioned here, and wasting my money.
Again, do the exercise described above to reach that conclusion.
Name: e2e-test-minion-group-4lw4
[ ... lines removed for clarity ...]
Capacity:
cpu: 2
memory: 7679792Ki
pods: 110
Allocatable:
cpu: 1800m
memory: 7474992Ki
pods: 110
[ ... lines removed for clarity ...]
Non-terminated Pods: (5 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%)
kube-system kube-dns-3297075139-61lj3 260m (13%) 0 (0%) 100Mi (1%) 170Mi (2%)
kube-system kube-proxy-e2e-test-... 100m (5%) 0 (0%) 0 (0%) 0 (0%)
kube-system monitoring-influxdb-grafana-v4-z1m12 200m (10%) 200m (10%) 600Mi (8%) 600Mi (8%)
kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
680m (34%) 400m (20%) 920Mi (12%) 1070Mi (14%)