
Insufficient cpu in Kubernetes multi node cluster

I am trying to deploy an application into my Kubernetes cluster. It is a multi node cluster with 3 m4.2xlarge AWS instances.

m4.2xlarge
vCPU: 8
Memory: 32 GiB

Now, in my deployment.yaml file for that service, I have specified:

resources:
  limits:
    cpu: 11
  requests:
    cpu: 11

It is giving an error, insufficient cpu, and the container is not being scheduled. I already have (8*3) = 24 CPUs available and I requested 11 of them. It should share the CPU resource across nodes. Are the limit and request CPU values applied to the containers per node? That is, should I have at least 11 CPUs per AWS instance?

A Pod is scheduled on a single Node. The resource requests: help decide where it can be scheduled. If you say requests: {cpu: 11} then there must be some single node with 11 (unreserved) cores available; but if your cluster only has 8-core m4.2xlarge nodes, no single node will be able to support this. Kubernetes can't “aggregate” cores across nodes in any useful way at this level.
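As a minimal sketch (not from the original question), a resources block that a single 8-vCPU m4.2xlarge node could actually satisfy might look like the following; the value 7 is an assumption, since the exact headroom depends on how much CPU the kubelet and system daemons reserve on each node:

resources:
  requests:
    cpu: 7    # assumption: leaves roughly a core of headroom on an 8-vCPU node
  limits:
    cpu: 7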

If you're requesting a lot of CPU because your process has a lot of threads to do concurrent processing, consider turning the number of threads down (maybe even to just 1) but then changing the replicas: in a Deployment spec to run many copies of it. Each individual Pod will get scheduled on a single Node, but with many replicas you'll get many Pods which can be spread across the three Nodes.
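A sketch of what that could look like, assuming a hypothetical app called myapp (the name, image, and per-pod numbers below are illustrative, not taken from the question): four replicas that each request 3 cores give roughly the same total capacity as one 11-core Pod, while each Pod still fits comfortably on an 8-vCPU node.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                 # hypothetical name
spec:
  replicas: 4                 # many small Pods instead of one 11-core Pod
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest   # placeholder image
        resources:
          requests:
            cpu: 3            # 4 x 3 = 12 cores total, spread across nodes
          limits:
            cpu: 3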

If your process really needs more than 8 cores to run, then you need individual systems with more than 8 cores; consider an m4.4xlarge (same RAM-to-CPU ratio) or a c4.4xlarge (same total RAM, twice the cores).

When you specify a limit or request for a pod, it is checked against the per-node capacity of CPU or memory. In other words, you can't have a Pod requesting more CPU or memory than is available on a single worker node of your cluster; if you do, it will go into the Pending state and will not come up until it finds a node matching the request of the Pod.

In your case, a worker node of size m4.2xlarge has 8 vCPUs, and in the deployment file you have requested 11 vCPUs for the Pod. This will never work, even though you have 3 nodes of size m4.2xlarge. A Pod always gets scheduled on a single worker Node, so it doesn't matter that the aggregate CPU capacity of your cluster is more than 11 vCPUs, because a Pod can only consume resources from a single worker node.

Hope this helps!

