Kubernetes - what happens if you don't set a pod CPU request or limit?
I understand the concept of setting a request or a limit on a Kubernetes pod for CPU and/or memory resources, but I'm trying to understand what happens if you don't set either a request or a limit for, say, CPU.
We have configured an NGINX pod, but it doesn't have either a request or a limit set for its CPU. I'm assuming it will get at a minimum 1 millicore, and will be given as many millicores as it needs and as are available on the node. If the node has exhausted all available cores, does it just stay stuck at 1 millicore?
what happens if you don't set either request or limit for say a CPU?
When you don't specify a request for CPU, you're saying you don't care how much CPU time the process running in your container is allotted. In the worst case, it may not get any CPU time at all (this happens when there is heavy demand on the CPU from other processes). Although this may be fine for low-priority batch jobs, which aren't time-critical, it obviously isn't appropriate for containers handling user requests.
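As a point of reference, a minimal sketch of the situation the question describes, a pod with no resources section at all, might look like this (the names are placeholders, not from the original question):

```yaml
# Hypothetical pod with no CPU (or memory) request or limit.
# Without a CPU request, the container gets only the default
# CPU-shares weight, so under heavy contention it can be
# starved of CPU time by containers that did set requests.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-no-resources   # placeholder name
spec:
  containers:
  - name: nginx
    image: nginx
    # no "resources:" block -> no request, no limit
```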
Similarly, when you specify a memory request for the container, you're saying that you expect the processes running inside the container to use at most N mebibytes of RAM. They might use less, but you're not expecting them to use more than that in normal circumstances.
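A sketch of what declaring such requests looks like, with illustrative values (not recommendations):

```yaml
# Hypothetical pod declaring minimum resource needs.
apiVersion: v1
kind: Pod
metadata:
  name: requests-pod   # placeholder name
spec:
  containers:
  - name: main
    image: busybox
    args: ["sleep", "infinity"]
    resources:
      requests:
        cpu: 100m      # 100 millicores = 1/10 of a CPU core
        memory: 10Mi   # 10 mebibytes of RAM
```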
Understanding how resource requests affect scheduling
By specifying resource requests, you're specifying the minimum amount of resources your pod needs. This information is what the Scheduler uses when scheduling the pod to a node.
Each node has a certain amount of CPU and memory it can allocate to pods. When scheduling a pod, the Scheduler will only consider nodes with enough unallocated resources to meet the pod's resource requirements. If the amount of unallocated CPU or memory is less than what the pod requests, Kubernetes will not schedule the pod to that node, because the node can't provide the minimum amount the pod requires.
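To illustrate the scheduling effect, a pod requesting more CPU than any node has left unallocated simply stays Pending (the values here are made up for illustration):

```yaml
# Hypothetical pod whose CPU request exceeds every node's
# unallocated CPU; the Scheduler finds no fitting node, so the
# pod remains in Pending with a FailedScheduling event.
apiVersion: v1
kind: Pod
metadata:
  name: big-request-pod   # placeholder name
spec:
  containers:
  - name: main
    image: busybox
    args: ["sleep", "infinity"]
    resources:
      requests:
        cpu: "8"   # illustrative: 8 full cores
```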
Understanding what happens when you exceed the limits
With CPU
CPU is a compressible resource, and it's only natural for a process to want to consume all of the CPU time when not waiting for an I/O operation. A process's CPU usage can be throttled, so when a CPU limit is set for a container, the process isn't given more CPU time than the configured limit.
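A hedged sketch of what such a limit looks like in a container spec (illustrative values):

```yaml
# Hypothetical resources block: the process is throttled at
# 200 millicores when it tries to use more, but it is never
# killed for exceeding a CPU limit.
resources:
  requests:
    cpu: 100m
  limits:
    cpu: 200m
```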
With Memory
With memory, it's different. When a process tries to allocate memory over its limit, the process is killed (it's said the container is OOMKilled, where OOM stands for Out Of Memory). If the pod's restart policy is set to Always or OnFailure, the process is restarted immediately, so you may not even notice it getting killed. But if it keeps going over the memory limit and getting killed, Kubernetes will begin restarting it with increasing delays between restarts. You'll see a CrashLoopBackOff status in that case.
kubectl get po
NAME READY STATUS RESTARTS AGE
memoryhog 0/1 CrashLoopBackOff 3 1m
Note: The CrashLoopBackOff status doesn't mean the Kubelet has given up. It means that after each crash, the Kubelet increases the time period before restarting the container.
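A pod that could end up in this state might look like the following sketch; the limit value is illustrative, and whether it is actually OOMKilled depends on how much memory the process allocates:

```yaml
# Hypothetical pod: if the container's process allocates more
# than 20Mi, it is OOMKilled; with restartPolicy: Always the
# Kubelet restarts it, backing off further after each crash.
apiVersion: v1
kind: Pod
metadata:
  name: memoryhog   # same name as in the kubectl output above
spec:
  restartPolicy: Always
  containers:
  - name: main
    image: busybox
    args: ["sleep", "infinity"]
    resources:
      limits:
        memory: 20Mi   # illustrative, deliberately low
```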
Examining why the container crashed
kubectl describe pod
Name:
...
Containers:
main: ...
State: Terminated
Reason: OOMKilled
...
Pay attention to the Reason attribute, OOMKilled. The current container was killed because it ran out of memory (OOM).