Kubernetes memory limit: Containers with and without memory limit on same pod

I have two containers within my pod (PodA).

The first container (C1) has the following limits:

Limits:
  cpu: 2
  memory: 1Gi
Requests:
  cpu: 100m
  memory: 128Mi

The second container (C2) has no requests/limits specified.

I have the following questions:

1. From what I can see with kubectl describe nodes, the memory/cpu requests/limits for PodA are the same as the ones from C1. Is that correct?
2. What are the memory/cpu limits for C2? Is it unbounded? Limited to the limits of PodA (e.g. the limits of C1)?
3. Follow-up of #2 -> What happens if C2 asks for more than 1Gi of memory? Will the container run out of memory, and cause the whole pod to crash? Or will it be able to grab more memory, as long as the node has free memory?

I tried to google, but all the examples I saw were ones where resource limits are set for both of the containers.
Kubernetes places your pods in Quality of Service (QoS) classes based on whether you have added requests and limits.

If every container in the pod has both cpu and memory limits set, with requests equal to the limits (requests default to the limits when only limits are given), the pod falls under the Guaranteed class.

If at least one container in the pod has requests (or limits) set, the pod comes under the Burstable class.

If no requests or limits are set on any container, the pod comes under the BestEffort class.

In your example, your pod falls under the Burstable class because C2 does not have limits set.
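For comparison, a pod only lands in the Guaranteed class when every container has both cpu and memory limits with requests equal to those limits. A minimal resources fragment for that case (a sketch, not your pod's spec):

```yaml
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 100m      # must equal the cpu request
    memory: 128Mi  # must equal the memory request
```

You can see which class Kubernetes assigned with kubectl get pod PodA -o jsonpath='{.status.qosClass}'.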
These requests and limits are used in two contexts: scheduling and resource exhaustion.

During scheduling, requests are used to select a node with enough available resources. Limits can be over-committed and are not considered in scheduling decisions.
There are two resources on which you can specify requests and limits natively: cpu and memory.
CPU is a compressible resource, i.e., the kernel can throttle the CPU usage of a process if required by allocating it less CPU time. So a process is allowed to use as much CPU as it wants while other processes are idle. If another process needs CPU, the OS can simply throttle the CPU time of the process that is using more. Unused CPU time is split between containers in the ratio of their requests. If you don't want this unlimited-CPU behaviour, i.e., you want your container not to cross a certain threshold, set a limit.
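For instance, if you want to cap a container's CPU usage even when the rest of the node is idle, you can set a limit alongside the request (the values here are illustrative):

```yaml
resources:
  requests:
    cpu: 100m   # used for scheduling and for splitting idle CPU time
  limits:
    cpu: 500m   # throttled above half a core, even if the node is idle
```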
Memory is not a compressible resource. Once allocated to a process, the kernel cannot reclaim the memory. So if a limit is set, a process gets OOM-killed if it tries to use more than the limit. If no limit is set, the process can allocate as much as it wants, but if there is memory exhaustion, the only way to regain free memory is to kill a process. This is where the QoS class comes into the picture. A BestEffort-class container would be first in line to get OOM-killed. Next, Burstable-class containers would be killed before any Guaranteed-class container gets killed. Among containers of the same QoS class, the container using a higher percentage of memory relative to its request is OOM-killed first.
From what I can see with kubectl describe nodes, the memory/cpu requests/limits for PodA are the same as the ones from C1. Is that correct?

Yes.
What are the memory/cpu limits for C2? Is it unbounded? Limited to the limits of PodA (e.g. the limits of C1)?

CPU, as a compressible resource, is unbounded for all containers (or bounded by the limit, if one is specified). C2 would get throttled when the other containers with requests set need more CPU time.
Follow-up of #2 -> What happens if C2 asks for more than 1Gi of memory? Will the container run out of memory, and cause the whole pod to crash? Or will it be able to grab more memory, as long as the node has free memory?

It can grab as much memory as it wants, but it would be the first to get OOM-killed if the node has no more free memory to allocate to other processes.