
Resource Allocation in Kubernetes: How are pods scheduled?

In Kubernetes, the scheduler's role is to find a suitable node for each Pod. Once a Pod is assigned to a node, it shares that node with other Pods, and those Pods compete for resources. Given this competition, how does Kubernetes allocate resources? Is there source code in Kubernetes that computes the resource allocation?

You can take a look at the articles below to see if they answer your question:

https://github.com/kubernetes/community/blob/master/contributors/devel/sig-scheduling/scheduler_algorithm.md#ranking-the-nodes

https://jvns.ca/blog/2017/07/27/how-does-the-kubernetes-scheduler-work/

The filtered nodes are considered suitable to host the Pod, and often more than one node remains. Kubernetes prioritizes the remaining nodes to find the "best" one for the Pod. The prioritization is performed by a set of priority functions. For each remaining node, a priority function gives a score that ranges from 0 to 10, with 10 representing "most preferred" and 0 "least preferred". Each priority function is weighted by a positive number, and the final score of each node is calculated by adding up all the weighted scores. For example, suppose there are two priority functions, priorityFunc1 and priorityFunc2, with weighting factors weight1 and weight2 respectively; then the final score of some node NodeA is:

finalScoreNodeA = (weight1 * priorityFunc1) + (weight2 * priorityFunc2)

After the scores of all nodes are calculated, the node with the highest score is chosen as the host of the Pod. If more than one node shares the highest score, one of them is chosen at random.
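The weighted-sum scoring and random tie-break described above can be sketched in Go. This is a simplified illustration, not the actual scheduler source; the type names (`priorityFunc`, `weightedPriority`, `pickNode`) are hypothetical:

```go
package main

import (
	"fmt"
	"math/rand"
)

// A priority function scores a node from 0 to 10.
type priorityFunc func(node string) float64

// weightedPriority pairs a priority function with its positive weight.
type weightedPriority struct {
	fn     priorityFunc
	weight float64
}

// pickNode computes each node's final score as the weighted sum of all
// priority-function scores, then returns the highest-scoring node,
// breaking ties at random.
func pickNode(nodes []string, priorities []weightedPriority) string {
	best := []string{}
	bestScore := -1.0
	for _, n := range nodes {
		score := 0.0
		for _, p := range priorities {
			score += p.weight * p.fn(n)
		}
		switch {
		case score > bestScore:
			bestScore = score
			best = []string{n}
		case score == bestScore:
			best = append(best, n)
		}
	}
	return best[rand.Intn(len(best))]
}

func main() {
	// Hypothetical per-node scores standing in for real priority functions.
	scores := map[string]float64{"node-a": 7, "node-b": 9, "node-c": 4}
	priorities := []weightedPriority{
		{fn: func(n string) float64 { return scores[n] }, weight: 1.0},
	}
	fmt.Println(pickNode([]string{"node-a", "node-b", "node-c"}, priorities))
}
```

With a single priority function, this prints `node-b`, the node with the highest score; with several, each score is first multiplied by its weight, matching the finalScoreNodeA formula above.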

Currently, the Kubernetes scheduler provides some practical priority functions, including:

LeastRequestedPriority : The node is prioritized based on the fraction of the node that would be free if the new Pod were scheduled onto the node. (In other words, (capacity - sum of requests of all Pods already on the node - request of Pod that is being scheduled) / capacity). CPU and memory are equally weighted. The node with the highest free fraction is the most preferred. Note that this priority function has the effect of spreading Pods across the nodes with respect to resource consumption.

CalculateNodeLabelPriority : Prefer nodes that have the specified label.

BalancedResourceAllocation : This priority function tries to put the Pod on a node such that the CPU and Memory utilization rate is balanced after the Pod is deployed.

CalculateSpreadPriority : Spread Pods by minimizing the number of Pods belonging to the same service on the same node. If zone information is present on the nodes, the priority will be adjusted so that pods are spread across zones and nodes.

CalculateAntiAffinityPriority : Spread Pods by minimizing the number of Pods belonging to the same service on nodes with the same value for a particular label.
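The LeastRequestedPriority formula quoted above can be illustrated with a short Go sketch. The function name and example numbers are hypothetical; the calculation follows the description: free fraction = (capacity − existing requests − new Pod's request) / capacity, with CPU and memory weighted equally and the result scaled to the 0-10 range:

```go
package main

import "fmt"

// leastRequestedScore computes the LeastRequestedPriority-style score:
// the free fraction of CPU and of memory after placing the new Pod,
// averaged with equal weight and scaled to 0-10.
func leastRequestedScore(cpuCap, cpuReq, memCap, memReq, podCPU, podMem float64) float64 {
	cpuFree := (cpuCap - cpuReq - podCPU) / cpuCap
	memFree := (memCap - memReq - podMem) / memCap
	return 10 * (cpuFree + memFree) / 2
}

func main() {
	// Hypothetical node: 4000m CPU / 8192Mi memory capacity,
	// 1000m CPU / 2048Mi already requested; new Pod asks for 500m / 1024Mi.
	fmt.Printf("%.2f\n", leastRequestedScore(4000, 1000, 8192, 2048, 500, 1024))
}
```

Here both CPU and memory would be 62.5% free after scheduling, so the node scores 6.25. A node with more spare capacity scores higher, which is what spreads Pods across nodes with respect to resource consumption.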
