Kubernetes MaxVolumeCount is less than max pods per node
I've just upgraded my Kubernetes cluster to version 1.7.11. This increased the maximum number of pods I can run per node from 40 to 100. However, it now seems I can only attach 39 volumes per node. If I try to create more, I get:
No nodes are available that match all of the following predicates:: MaxVolumeCount (3), PodToleratesNodeTaints (1).
This is rather annoying because I was hoping to be able to put more than 40 pods on a node. I don't want to decrease the node size, because that would limit the maximum amount of CPU I can allow a pod to use.
I've set up my cluster on AWS using Kops. Is there a way to change the MaxVolumeCount limit?
Is it normal to have a MaxVolumeCount limit of 39?
System info:
Kernel Version: 4.4.111-k8s
OS Image: Debian GNU/Linux 8 (jessie)
Container Runtime Version: docker://1.12.6
Kubelet Version: v1.7.11
Kube-Proxy Version: v1.7.11
Operating system: linux
Architecture: amd64
Not every pod needs a volume mounted. Also, there are external factors at work here: if you look at e.g. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_limits.html, the limit of 40 EBS volumes per instance is actually an AWS limitation, not a Kubernetes one. Additionally, not every volume needs to be backed by EBS; you can use e.g. NFS (AWS EFS), which would not fall under the same limit.
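As a sketch of that alternative, a PersistentVolume backed by EFS through the built-in NFS volume plugin might look like the manifest below. The name, size, filesystem ID, and region are all placeholders, not values from the original question:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-example            # hypothetical name
spec:
  capacity:
    storage: 5Gi               # NFS ignores this for enforcement; required by the API
  accessModes:
    - ReadWriteMany            # EFS/NFS supports many pods mounting read-write
  nfs:
    # Hypothetical EFS DNS name: <filesystem-id>.efs.<region>.amazonaws.com
    server: fs-12345678.efs.us-east-1.amazonaws.com
    path: /
```

Pods claiming this volume mount it over the network, so it does not count against the per-instance EBS attachment limit that triggers the MaxVolumeCount predicate.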