
How to limit memory size for .net core application in pod of kubernetes?

I have a kubernetes cluster with 16Gb RAM on each node.

And a typical dotnet core webapi application.

I tried to configure limits like here:

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container

But my app believes it can use 16Gb.

Because cat /proc/meminfo | head -n 1 returns MemTotal: 16635172 kB (or maybe something from cgroups, I'm not sure).

So... maybe the limit does not work?

No! K8s successfully kills my pod when it reaches the memory limit.

.net core has an interesting GC mode, more details here. It is a good mode, but it doesn't look like a working solution for k8s, because the application gets wrong info about the available memory. Unlimited pods could take all the host memory, but with limits they will be killed.
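For reference, what the pod is actually constrained to lives in the cgroup filesystem rather than in /proc/meminfo. A minimal sketch of checking it from inside the container, assuming cgroup v1 and the usual mount path (cgroup v2 exposes /sys/fs/cgroup/memory.max instead):

using System;
using System.IO;

class CgroupLimitCheck
{
    static void Main()
    {
        // /proc/meminfo reports the host's total memory, not the pod limit.
        // Under cgroup v1 the container memory limit is exposed in this file
        // (the path is an assumption about the cluster's cgroup setup).
        const string limitFile = "/sys/fs/cgroup/memory/memory.limit_in_bytes";

        if (File.Exists(limitFile) &&
            long.TryParse(File.ReadAllText(limitFile).Trim(), out long limitBytes))
        {
            Console.WriteLine($"cgroup memory limit: {limitBytes / (1024 * 1024)} MiB");
        }
        else
        {
            Console.WriteLine("cgroup v1 memory limit file not found or unreadable.");
        }
    }
}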

Now I see two ways:

  1. Use Workstation GC
  2. Use limits and a k8s readiness probe: the handler will check the current memory usage on each iteration and call GC.Collect() if the currently used memory is near 80% of the limit (I'll pass the limit via an env variable); see the sketch below this list.
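A rough sketch of the second idea, assuming a recent ASP.NET Core app with minimal APIs and a hypothetical MEMORY_LIMIT_BYTES environment variable set in the pod spec to mirror the container limit:

using System;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// MEMORY_LIMIT_BYTES is a hypothetical variable; fall back to 512 MiB if unset.
long limitBytes = long.TryParse(
    Environment.GetEnvironmentVariable("MEMORY_LIMIT_BYTES"), out long parsed)
    ? parsed
    : 512L * 1024 * 1024;

// Readiness endpoint: compare the managed heap against 80% of the configured
// limit and force a collection before reporting ready.
app.MapGet("/ready", () =>
{
    long used = GC.GetTotalMemory(forceFullCollection: false);
    if (used > limitBytes * 0.8)
    {
        GC.Collect();
    }
    return Results.Ok("ready");
});

app.Run();

The pod's readinessProbe httpGet would then point at /ready.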

How to limit memory size for .net core application in pod of kubernetes?

How to correctly set memory limits for pods in kubernetes?

You should switch to Workstation GC to optimize for lower memory usage. The readiness probe is not meant for checking memory.
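The switch itself is done through runtime configuration (for example the ServerGarbageCollection MSBuild property or the COMPlus_gcServer=0 environment variable); a small sketch to verify which mode the running process actually ended up with:

using System;
using System.Runtime;

class GcModeCheck
{
    static void Main()
    {
        // GCSettings.IsServerGC tells whether the process runs the Server GC
        // (the default for ASP.NET Core) or the Workstation GC.
        Console.WriteLine(GCSettings.IsServerGC
            ? "Running with Server GC"
            : "Running with Workstation GC");
    }
}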

In order to properly configure the resource limits, you should test your application on a single pod under heavy load and monitor the usage (e.g. with Prometheus & Grafana). For more in-depth details see this blog post. If you haven't deployed a monitoring stack, you can at least use kubectl top pods.

Once you have found the breaking points of a single pod, you can add the limits to the specific pod like the example below (see the Kubernetes Documentation for more examples and details):

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: net-core-app
    image: net-code-image
    resources:
      requests:
        memory: 64Mi
        cpu: 250m
      limits:
        memory: 128Mi
        cpu: 500m

The readiness probe is actually meant to be used to tell when a Pod is ready in the first place. I guess you were thinking of the liveness probe, but that wouldn't be an adequate use either, because Kubernetes will kill the Pod when it exceeds its resource limit and reschedule it.

Use the environment variable COMPlus_GCHeapHardLimit.

Documentation: https://docs.microsoft.com/en-us/dotnet/api/system.gcmemoryinfo.totalavailablememorybytes?view=net-5.0

And notice: you should use hexadecimal values.

This means that the (hexadecimal) value 10000000 is 256MB!
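Whether the hard limit was picked up can be checked from inside the application with the API that the documentation link above describes; a small sketch, assuming .NET Core 3.0 or later:

using System;

class HeapLimitCheck
{
    static void Main()
    {
        // TotalAvailableMemoryBytes reflects the heap hard limit (or the
        // container/machine limit) as seen by the GC.
        GCMemoryInfo info = GC.GetGCMemoryInfo();
        Console.WriteLine(
            $"GC sees {info.TotalAvailableMemoryBytes / (1024 * 1024)} MiB available");
    }
}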

I used docker run command arguments, which can be passed via the deployment yaml, to specify the memory size of the container:

args:
  - "--memory=124m --memory-swap=124m"

This way the .net GC 'sees' that only 124MB is available.

The args specifier is on the same level as ports and name under the containers specifier:

  containers:
    - name: xxx
      ports:
      ....
      args:
        - "--memory=124m --memory-swap=124m"

A description of the arguments '--memory' and '--memory-swap' can be found here: https://docs.docker.com/config/containers/resource_constraints/

-m or --memory= The maximum amount of memory the container can use. If you set this option, the minimum allowed value is 6m (6 megabytes).

--memory-swap* The amount of memory this container is allowed to swap to disk. See --memory-swap details.

More details about passing arguments to the run command can be found here: How to pass docker run flags via kubernetes pod
