
Allocate or Limit resource for pods in Kubernetes?

The resource limit of the Pod has been set as:

resources:
  limits:
    cpu: 500m
    memory: 5Gi

and there is 10G of memory left on the node.

I've created 5 pods in a short time successfully, and the node may still have some memory left, e.g. 8G.

Memory usage grows as time goes on and eventually reaches the limits (5G x 5 = 25G > 10G), and then the node becomes unresponsive.

In order to ensure usability, is there a way to set a resource limit on the node?

Update

The core problem is that a pod's memory usage does not always equal its limit, especially when it has just started. So an unbounded number of pods can be created in a short time, driving all nodes to full load. That's not good. There might be a way to allocate resources rather than just setting a limit.

Update 2

I've tested again with both limits and requests set:

resources:
  limits:
    cpu: 500m
    memory: 5Gi
  requests:
    cpu: 500m
    memory: 5Gi

The total memory is 15G with 14G available, yet 3 pods are scheduled and running successfully:

> free -mh
              total        used        free      shared  buff/cache   available
Mem:            15G        1.1G        8.3G        3.4M        6.2G         14G
Swap:            0B          0B          0B

> docker stats

CONTAINER           CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O
44eaa3e2d68c        0.63%               1.939 GB / 5.369 GB   36.11%              0 B / 0 B           47.84 MB / 0 B
87099000037c        0.58%               2.187 GB / 5.369 GB   40.74%              0 B / 0 B           48.01 MB / 0 B
d5954ab37642        0.58%               1.936 GB / 5.369 GB   36.07%              0 B / 0 B           47.81 MB / 0 B

It seems that the node will be crushed soon XD

Update 3

Now I change the resource settings to a request of 8G and a limit of 5G:

resources:
  limits:
    cpu: 500m
    memory: 5Gi
  requests:
    cpu: 500m
    memory: 8Gi

The results are: [screenshot omitted]

According to the k8s source code for the resource check:

[screenshot of the k8s scheduler resource-check code omitted]

The total memory is only 15G, and all the pods together need 24G, so all of them may be killed. (A single one of my containers will usually use more than 16G if it is not limited.)

It means that you'd better keep the requests exactly equal to the limits in order to avoid the pods being killed or the node being crushed. And since an unspecified requests value defaults to the limit, what exactly are requests for? I think limits alone is totally enough, or, IMO, contrary to what K8s claims, I would rather set the resource request greater than the limit, in order to ensure the usability of the nodes.

Update 4

Kubernetes 1.1 schedules the pods' memory requests via the formula:

(capacity - memoryRequested) >= podRequest.memory
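
Plugging in the rough numbers from Update 2 (about 15G of node capacity and pods requesting 5Gi each; this is only an illustration, the real scheduler works on the capacity reported in bytes):

15G -  0G >= 5Gi   # 1st pod: scheduled
15G -  5G >= 5Gi   # 2nd pod: scheduled
15G - 10G >= 5Gi   # 3rd pod: scheduled
15G - 15G >= 5Gi   # false: a 4th pod would stay Pending

Note that the ~2GB each container actually uses (the docker stats output above) never appears in this check.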

It seems that Kubernetes does not care about actual memory usage, as Vishnu Kannan said. So the node will be crushed if other apps use a lot of its memory.

Fortunately, since commit e64fe822, the formula has been changed to:

(allocatable - memoryRequested) >= podRequest.memory
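
Here allocatable is roughly the node capacity minus whatever is reserved for non-pod overhead (system daemons, kubelet, docker). A rough sketch, with a made-up 2G reservation:

allocatable = capacity - reserved                      # e.g. 15G - 2G = 13G (reservation value is illustrative)
(allocatable - memoryRequested) >= podRequest.memory

That leaves some headroom for the daemons on the node instead of letting pod requests claim every last byte.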

Waiting for k8s v1.2!

Kubernetes resource specifications have two fields, request and limit.

limits place a cap on how much of a resource a container can use. For memory, if a container goes above its limit, it will be OOM killed. For CPU, its usage may be throttled.
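
For example, a container that only sets limits could look like the following sketch (the name and image are placeholders, not from the question):

containers:
- name: app                # placeholder name
  image: example/app:1.0   # placeholder image
  resources:
    limits:
      cpu: 500m            # CPU usage above this is throttled
      memory: 5Gi          # memory usage above this gets the container OOM killed

When only limits are given, the requests default to the same values, which matches the defaulting behaviour noted in Update 3 above.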

requests are different in that they ensure the node that the pod is put on has at least that much capacity available for it. If you want to make sure that your pods will be able to grow to a particular size without the node running out of resources, specify a request of that size. This will limit how many pods you can schedule, though -- a 10G node will only be able to fit 2 pods with a 5G memory request.
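
As a minimal sketch (again with placeholder names), a pod that reserves 5Gi up front would declare:

containers:
- name: app                # placeholder name
  image: example/app:1.0   # placeholder image
  resources:
    requests:
      memory: 5Gi          # the scheduler sets aside 5Gi on the chosen node

With this request, a node offering 10G of schedulable memory accepts at most two such pods; a third stays Pending no matter how little memory the first two actually use.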

Kubernetes supports Quality of Service. If your Pods have limits set, they belong to the Guaranteed class and the likelihood of them getting killed due to system memory pressure is extremely low. If the docker daemon or some other daemon you run on the node consumes a lot of memory, that's when there is a possibility for Guaranteed Pods to get killed.
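
In other words, the Guaranteed pattern described here amounts to requests equal to limits on every container, e.g. a sketch reusing the sizes from the question:

resources:
  limits:
    cpu: 500m
    memory: 5Gi
  requests:
    cpu: 500m     # equal to the limit
    memory: 5Gi   # equal to the limit

This is exactly the spec from Update 2, so those pods would fall into the Guaranteed class.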

The Kube scheduler does take into account memory capacity and memory allocated while scheduling. For instance, you cannot schedule more than two pods each requesting 5GB on a 10GB node.

Memory usage is not taken into account by Kubernetes for scheduling purposes as of now.
