What is the exact use of requests in Kubernetes?

I'm confused about the relationship between two parameters: requests and the cpu.shares value of the cgroup, which is updated once the Pod is deployed. According to the reading I've done so far, cpu.shares reflects some kind of priority when trying to get the chance to consume the CPU, and it's a relative value.

So my question is: why does Kubernetes consider the request value of the CPU as an absolute value when scheduling? When it comes to the CPU, processes get a time slice to execute based on their priorities (according to the CFS mechanism). To my knowledge, there's no such thing as giving exact amounts of CPU (1 CPU, 2 CPUs, etc.). So, if the cpu.shares value is what prioritizes the tasks, why does Kubernetes consider the exact request value (e.g. 1500m, 200m) to find a node?

Please correct me if I've got this wrong. Thanks!

Answering your questions from the main question and comments:

So my question is: why does Kubernetes consider the request value of the CPU as an absolute value when scheduling?

To my knowledge, there's no such thing as giving exact amounts of CPU (1 CPU, 2 CPUs, etc.). So, if the cpu.shares value is what prioritizes the tasks, why does Kubernetes consider the exact request value (e.g. 1500m, 200m) to find a node?

It's because decimal CPU values from the requests are always converted to values in millicores; for example, 0.1 is equal to 100m, which can be read as "one hundred millicpu" or "one hundred millicores". These units are specific to Kubernetes:

Fractional requests are allowed. A Container with spec.containers[].resources.requests.cpu of 0.5 is guaranteed half as much CPU as one that asks for 1 CPU. The expression 0.1 is equivalent to the expression 100m, which can be read as "one hundred millicpu". Some people say "one hundred millicores", and this is understood to mean the same thing. A request with a decimal point, like 0.1, is converted to 100m by the API, and precision finer than 1m is not allowed. For this reason, the form 100m might be preferred.

CPU is always requested as an absolute quantity, never as a relative quantity; 0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine.

Based on the above, remember that you can request, let's say, 1.5 CPUs of the node by specifying either cpu: 1.5 or cpu: 1500m.
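As a quick sketch of that equivalence (assuming a deployment named test-deployment with an nginx container, like the one defined later in this answer), the API normalizes the decimal form to millicores:

kubectl set resources deployment test-deployment -c nginx --requests=cpu=1.5

# Read the request back; the API stores the canonical millicore form
kubectl get deployment test-deployment \
  -o jsonpath='{.spec.template.spec.containers[0].resources.requests.cpu}'
# Expected output: 1500m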

I just want to know whether lowering the cpu.shares value in cgroups (which is modified by k8s after the deployment) affects the CPU power consumed by the process. For instance, assume that containers A and B have 1024 and 2048 shares allocated, so the available resources will be split in a 1:2 ratio. Would it be the same as if we configured cpu.shares as 10 and 20 for the two containers? The ratio is still 1:2.

Let's make it clear - it's true that the ratio is the same, but the values are different. 1024 and 2048 in cpu.shares correspond to cpu: 1000m and cpu: 2000m defined in Kubernetes resources, while 10 and 20 correspond to cpu: 10m and cpu: 20m.
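If you want to double-check that mapping, here is a minimal shell sketch, assuming the usual cgroup v1 convention of 1024 shares per CPU core (values are truncated to integers):

# cpu.shares = CPU request in millicores * 1024 / 1000
echo $(( 1000 * 1024 / 1000 ))   # 1024 shares for cpu: 1000m
echo $(( 2000 * 1024 / 1000 ))   # 2048 shares for cpu: 2000m
echo $((   10 * 1024 / 1000 ))   # 10 shares for cpu: 10m
echo $((   20 * 1024 / 1000 ))   # 20 shares for cpu: 20m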

Let's say the cluster nodes are based on a Linux OS. So, how does Kubernetes ensure that the request value is given to a container? Ultimately, the OS will use the configuration available in the cgroup to allocate the resources, right? It modifies the cpu.shares value of the cgroup. So my question is: which files are modified by k8s to tell the operating system to give 100m or 200m to a container?

Yes, your thinking is correct. Let me explain in more detail.

Generally, on a Kubernetes node there are three cgroups under the root cgroup, named as slices: system.slice, user.slice, and kubepods.

Kubernetes uses the cpu.shares file to allocate CPU resources. In this case, the root cgroup inherits 4096 CPU shares, which is 100% of the available CPU power on the node (1 core = 1024; this is a fixed value, so 4096 shares correspond to a 4-core node). The root cgroup allocates its shares proportionally based on its children's cpu.shares values, and they do the same with their children, and so on. On a typical Kubernetes node there are three cgroups under the root cgroup, namely system.slice, user.slice, and kubepods. The first two are used to allocate resources for critical system workloads and non-k8s user space programs. The last one, kubepods, is created by k8s to allocate resources to pods.
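As an illustration, on a node using cgroup v1 the hierarchy can be inspected directly (paths may differ between distributions, and cgroup v2 uses cpu.weight instead of cpu.shares):

# List the slices under the root CPU cgroup (cgroup v1 layout)
ls /sys/fs/cgroup/cpu/
# Typically includes: system.slice  user.slice  kubepods  ...

# Shares assigned to the cgroup that holds all the pods
cat /sys/fs/cgroup/cpu/kubepods/cpu.shares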

To check which files are modified, we need to go to the /sys/fs/cgroup/cpu directory. Here we can find a directory called kubepods (which is one of the above-mentioned slices), where all the cpu.shares files for the pods are located. In the kubepods directory we can find two other folders - besteffort and burstable. It is worth mentioning here that Kubernetes has three QoS classes: Guaranteed, Burstable, and BestEffort.

Each pod has an assigned QoS class, and depending on which class it is, the pod is located in the corresponding directory (except Guaranteed - a pod with this class is created directly in the kubepods directory).
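Sketched out (the pod UID directory below is a hypothetical placeholder), the layout looks like this:

ls /sys/fs/cgroup/cpu/kubepods/
# besteffort  burstable  pod<uid-of-a-guaranteed-pod>  cpu.shares  ...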

For example, I'm creating a pod with the following definition:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
spec:
  selector:
    matchLabels:
      app: test-deployment
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: test-deployment
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 300m
      - name: busybox
        image: busybox
        args:
        - sleep
        - "999999"
        resources:
          requests:
            cpu: 150m

Based on the earlier mentioned definitions, this pod will be assigned the QoS class Burstable (it has CPU requests but no limits), thus it will be created in the /sys/fs/cgroup/cpu/kubepods/burstable directory.
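You can confirm the assigned class from the pod's status (a quick sketch - substitute the actual pod name generated by the Deployment):

kubectl get pod <pod-name> -o jsonpath='{.status.qosClass}'
# Expected output: Burstable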

Now we can check the cpu.shares set for this pod:

user@cluster /sys/fs/cgroup/cpu/kubepods/burstable/podf13d6898-69f9-44eb-8ea6-5284e1778f90 $ cat cpu.shares
460

This is correct, as one container requests 300m and the second one 150m; the pod total of 450m is converted to shares by multiplying by 1024 and dividing by 1000, which gives 460. For each container we have sub-directories as well:

user@cluster /sys/fs/cgroup/cpu/kubepods/burstable/podf13d6898-69f9-44eb-8ea6-5284e1778f90/fa6194cbda0ccd0b1dc77793bfbff608064aa576a5a83a2f1c5c741de8cf019a $ cat cpu.shares
153
user@cluster /sys/fs/cgroup/cpu/kubepods/burstable/podf13d6898-69f9-44eb-8ea6-5284e1778f90/d5ba592186874637d703544ceb6f270939733f6292e1fea7435dd55b6f3f1829 $ cat cpu.shares
307
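These numbers line up with the millicores-to-shares conversion (truncated to integer values), which we can verify with a quick calculation:

echo $(( (300 + 150) * 1024 / 1000 ))   # 460 -> pod-level cpu.shares
echo $(( 150 * 1024 / 1000 ))           # 153 -> the busybox container
echo $(( 300 * 1024 / 1000 ))           # 307 -> the nginx container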

If you want to read more about Kubernetes CPU management, I'd recommend reading the official Kubernetes documentation on managing resources for containers.
