Kubernetes pods are not spread across different nodes
I have a Kubernetes cluster on GKE. I know Kubernetes is supposed to spread pods with the same labels across different nodes, but this isn't happening for me. Here is my node description.
Name: gke-pubnation-cluster-prod-high-cpu-14a766ad-node-dpob
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Fri, 27 May 2016 21:11:17 -0400 Thu, 26 May 2016 22:16:27 -0400 KubeletHasSufficientDisk kubelet has sufficient disk space available
Ready True Fri, 27 May 2016 21:11:17 -0400 Thu, 26 May 2016 22:17:02 -0400 KubeletReady kubelet is posting ready status. WARNING: CPU hardcapping unsupported
Capacity:
cpu: 2
memory: 1848660Ki
pods: 110
System Info:
Machine ID:
Kernel Version: 3.16.0-4-amd64
OS Image: Debian GNU/Linux 7 (wheezy)
Container Runtime Version: docker://1.9.1
Kubelet Version: v1.2.4
Kube-Proxy Version: v1.2.4
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system fluentd-cloud-logging-gke-pubnation-cluster-prod-high-cpu-14a766ad-node-dpob 80m (4%) 0 (0%) 200Mi (11%) 200Mi (11%)
kube-system kube-proxy-gke-pubnation-cluster-prod-high-cpu-14a766ad-node-dpob 20m (1%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
100m (5%) 0 (0%) 200Mi (11%) 200Mi (11%)
No events.
Name: gke-pubnation-cluster-prod-high-cpu-14a766ad-node-qhw2
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Fri, 27 May 2016 21:11:17 -0400 Fri, 27 May 2016 18:16:38 -0400 KubeletHasSufficientDisk kubelet has sufficient disk space available
Ready True Fri, 27 May 2016 21:11:17 -0400 Fri, 27 May 2016 18:17:12 -0400 KubeletReady kubelet is posting ready status. WARNING: CPU hardcapping unsupported
Capacity:
pods: 110
cpu: 2
memory: 1848660Ki
System Info:
Machine ID:
Kernel Version: 3.16.0-4-amd64
OS Image: Debian GNU/Linux 7 (wheezy)
Container Runtime Version: docker://1.9.1
Kubelet Version: v1.2.4
Kube-Proxy Version: v1.2.4
Non-terminated Pods: (10 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
default pn-minions-deployment-prod-3923308490-axucq 100m (5%) 0 (0%) 0 (0%) 0 (0%)
default pn-minions-deployment-prod-3923308490-mvn54 100m (5%) 0 (0%) 0 (0%) 0 (0%)
default pn-minions-deployment-staging-2522417973-8cq5p 100m (5%) 0 (0%) 0 (0%) 0 (0%)
default pn-minions-deployment-staging-2522417973-9yatt 100m (5%) 0 (0%) 0 (0%) 0 (0%)
kube-system fluentd-cloud-logging-gke-pubnation-cluster-prod-high-cpu-14a766ad-node-qhw2 80m (4%) 0 (0%) 200Mi (11%) 200Mi (11%)
kube-system heapster-v1.0.2-1246684275-a8eab 150m (7%) 150m (7%) 308Mi (17%) 308Mi (17%)
kube-system kube-dns-v11-uzl1h 310m (15%) 310m (15%) 170Mi (9%) 920Mi (50%)
kube-system kube-proxy-gke-pubnation-cluster-prod-high-cpu-14a766ad-node-qhw2 20m (1%) 0 (0%) 0 (0%) 0 (0%)
kube-system kubernetes-dashboard-v1.0.1-3co2b 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%)
kube-system l7-lb-controller-v0.6.0-o5ojv 110m (5%) 110m (5%) 70Mi (3%) 120Mi (6%)
Allocated resources:
(Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
1170m (58%) 670m (33%) 798Mi (44%) 1598Mi (88%)
No events.
Here is the description for the deployments:
Name: pn-minions-deployment-prod
Namespace: default
Labels: app=pn-minions,environment=production
Selector: app=pn-minions,environment=production
Replicas: 2 updated | 2 total | 2 available | 0 unavailable
OldReplicaSets: <none>
NewReplicaSet: pn-minions-deployment-prod-3923308490 (2/2 replicas created)
Name: pn-minions-deployment-staging
Namespace: default
Labels: app=pn-minions,environment=staging
Selector: app=pn-minions,environment=staging
Replicas: 2 updated | 2 total | 2 available | 0 unavailable
OldReplicaSets: <none>
NewReplicaSet: pn-minions-deployment-staging-2522417973 (2/2 replicas created)
As you can see, all four pods are on the same node. Should I do something in addition to make this work?
By default, pods run with unbounded CPU and memory limits. This means that any pod in the system is able to consume as much CPU and memory as is available on the node that executes it. http://kubernetes.io/docs/admin/limitrange/
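As the linked LimitRange documentation describes, a namespace can define default requests and limits that are applied to containers that don't specify their own. A minimal sketch, assuming you want defaults in the default namespace (the name and values below are illustrative, not taken from the cluster above):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-mem-defaults      # illustrative name
  namespace: default
spec:
  limits:
  - type: Container
    default:                  # limit applied when a container specifies none
      cpu: 400m
      memory: 512Mi
    defaultRequest:           # request applied when none is given; used by the scheduler
      cpu: 100m
      memory: 128Mi
```

With defaults like these in place, every pod carries a non-zero CPU request, which gives the scheduler something to balance across nodes.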
When you don't specify CPU limits, Kubernetes does not know how much CPU each pod requires and may schedule all of the pods onto one node.
Here is an example of a Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: quay.io/naveensrinivasan/jenkins:0.4
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: "400m"
#        volumeMounts:
#        - mountPath: /var/jenkins_home
#          name: jenkins-volume
#      volumes:
#      - name: jenkins-volume
#        awsElasticBlockStore:
#          volumeID: vol-29c4b99f
#          fsType: ext4
      imagePullSecrets:
      - name: registrypullsecret
Here is the output of kubectl describe po | grep Node after creating the deployment:
$ kubectl describe po | grep Node
Node: ip-172-20-0-26.us-west-2.compute.internal/172.20.0.26
Node: ip-172-20-0-29.us-west-2.compute.internal/172.20.0.29
Node: ip-172-20-0-27.us-west-2.compute.internal/172.20.0.27
Node: ip-172-20-0-29.us-west-2.compute.internal/172.20.0.29
The pods are now created on 4 different nodes. The placement is based on the CPU limits in your cluster. You could increase or decrease the number of replicas to see pods being deployed onto different nodes.
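For example, scaling the Deployment above and re-checking the placement (a sketch; this assumes a running cluster with kubectl configured against it, and enough nodes to spread onto):

    # Scale the jenkins Deployment and watch where the new pods land
    kubectl scale deployment jenkins --replicas=6
    kubectl describe po | grep Node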
This isn't GKE- or AWS-specific.