
How to set an accurate CPU unit for a container in a pod based on Docker stat data

I used to run a Docker container on a Linux machine. Now I want to move the same container to Kubernetes, where it will run as a single container in a pod. I want to set CPU resource requests and limits for that container. The only data I have comes from when it was running on Docker: I used docker stats to find CPU usage. Let's say the command showed 0.07% on average for my container, and 0.09% at bursts.

Based on Kubernetes CPU resource units, how can I translate these numbers into milliCPU for use in my pod manifest?

....
resources:
  requests:
    cpu: ?
  limits:
    cpu: ?

Regarding the limits you need to consider: CPU and memory are each a resource type, and each resource type has a base unit. CPU represents compute processing and is specified in units of Kubernetes CPUs. Memory is specified in units of bytes. For Linux workloads, you can also specify huge page resources. Huge pages are a Linux-specific feature where the node kernel allocates blocks of memory that are much larger than the default page size.

For example, on a system where the default page size is 4KiB, you could specify a limit, hugepages-2Mi: 80Mi. If the container tries allocating over 40 2MiB huge pages (a total of 80 MiB), that allocation fails.
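As a rough sketch of how such a limit appears in a manifest (the pod name and image below are placeholders, not from the question), note that huge page requests must equal their limits:

apiVersion: v1
kind: Pod
metadata:
  name: hugepages-demo        # hypothetical name
spec:
  containers:
  - name: app
    image: nginx              # placeholder image
    resources:
      requests:
        memory: 128Mi
        hugepages-2Mi: 80Mi   # huge page requests must equal the limits
      limits:
        memory: 128Mi
        hugepages-2Mi: 80Mi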

Limits and requests for CPU resources are measured in CPU units. In Kubernetes, 1 CPU unit is equivalent to 1 physical CPU core or 1 virtual core, depending on whether the node is a physical host or a virtual machine running inside a physical machine.
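To translate your docker stats figures directly: 1 Kubernetes CPU equals 1000 milliCPU (1000m), and the docker stats CPU column is normally a percentage of a single core (it can exceed 100% on multi-core hosts). Under that assumption, the 0.07% average is about 0.0007 CPU (roughly 0.7m) and the 0.09% burst is about 0.9m. Rounded up to the 1m granularity Kubernetes accepts, a minimal sketch for the manifest could be:

resources:
  requests:
    cpu: 1m      # ~0.07% of one core, rounded up to the 1m minimum granularity
  limits:
    cpu: 2m      # above the ~0.09% burst; limits this small throttle easily, so verify against real load in the cluster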

As a reference, consider the following example:

$ kubectl describe limits mylimits --namespace=limit-example

Name:   mylimits
Namespace:  limit-example
Type        Resource   Min    Max   Default Request   Default Limit   Max Limit/Request Ratio
----        --------   ---    ---   ---------------   -------------   -----------------------
Pod         cpu        200m   2     -                 -               -
Pod         memory     6Mi    1Gi   -                 -               -
Container   cpu        100m   2     200m              300m            -
Container   memory     3Mi    1Gi   100Mi             200Mi           -

In this scenario, the following limits were specified:

  1. If a max constraint is specified for a resource (2 CPU and 1Gi of memory in this case), then a limit must be specified for that resource in every container. Failing to specify a limit results in a validation error when attempting to create the pod. Note that a default limit is set by the default field in the limits.yaml file (300m CPU and 200Mi memory); a sketch of such a LimitRange manifest appears after this list.

  2. If a min constraint is specified for a resource (100m CPU and 3Mi of memory in this case), then a request must be specified for that resource in every container. Failing to specify a request results in a validation error when attempting to create the pod. Note that a default request is set by the defaultRequest field in the limits.yaml file (200m CPU and 100Mi memory).

  3. For any pod, the sum of all containers' memory requests must be >= 6Mi and the sum of all containers' memory limits must be <= 1Gi; the sum of all containers' CPU requests must be >= 200m and the sum of all containers' CPU limits must be <= 2.
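As a sketch, a limits.yaml that would produce the describe output above could look like the following (reconstructed from that output, not the original file):

apiVersion: v1
kind: LimitRange
metadata:
  name: mylimits
  namespace: limit-example
spec:
  limits:
  - type: Pod                 # pod-level min/max across all containers combined
    min:
      cpu: 200m
      memory: 6Mi
    max:
      cpu: "2"
      memory: 1Gi
  - type: Container           # per-container constraints and defaults
    min:
      cpu: 100m
      memory: 3Mi
    max:
      cpu: "2"
      memory: 1Gi
    defaultRequest:           # applied when a container omits its request
      cpu: 200m
      memory: 100Mi
    default:                  # applied when a container omits its limit
      cpu: 300m
      memory: 200Mi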

In conclusion, you need to monitor the running Docker container's CPU and memory metrics relative to the host's capacity, and then set the requests and limits of your deployment accordingly. A case study with a similar scenario can serve as further reference.
