
How to limit memory size for .net core application in pod of kubernetes?

I have a kubernetes cluster with 16 GB of RAM on each node

And a typical dotnet core webapi application

I tried to configure limits as shown here:

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container

But my app believes that it can use 16 GB

Because cat /proc/meminfo | head -n 1 returns MemTotal: 16635172 kB (or maybe something from cgroups, I'm not sure)

So... maybe the limit does not work?

No! K8s successfully kills my pod when it reaches the memory limit

.NET Core has an interesting GC mode (Server GC), more details here. It is a good mode, but it doesn't look like a working solution for k8s, because the application gets wrong information about the available memory. Pods without limits could grab all of the host's memory, and pods with limits will be killed.

Now I see two ways:

  1. Use Workstation GC
  2. Use limits and a k8s readiness probe: the handler checks current memory usage on each call and calls GC.Collect() if used memory is near 80% of the limit (I'll pass the limit via an environment variable; see the sketch below)
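
For option 2, Kubernetes can inject the configured limit into the container through the Downward API, so the handler doesn't need a hard-coded number. A minimal sketch, with names of my own (the MEMORY_LIMIT_BYTES variable, container and image names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: webapi
spec:
  containers:
  - name: webapi
    image: my-dotnet-webapi        # placeholder image
    resources:
      limits:
        memory: 512Mi
    env:
    - name: MEMORY_LIMIT_BYTES     # read by the probe handler
      valueFrom:
        resourceFieldRef:
          resource: limits.memory  # resolves to the container's memory limit, in bytes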

How to limit memory size for .net core application in pod of kubernetes?

How to correctly set memory limits for pods in kubernetes?

You should switch to Workstation GC to optimize for lower memory usage. The readiness probe is not meant for checking memory.
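
For the Workstation GC switch, a minimal sketch of doing it with an environment variable in the container spec (COMPlus_gcServer=0 selects Workstation GC; the container and image names below are placeholders):

  containers:
  - name: net-core-app            # placeholder name
    image: net-core-image         # placeholder image
    env:
    - name: COMPlus_gcServer      # 0 = Workstation GC, 1 = Server GC
      value: "0"

The same switch can also be made in the project file via the ServerGarbageCollection property.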

In order to properly configure the resource limits, you should test your application on a single pod under heavy load and monitor the usage (e.g. with Prometheus & Grafana). For more in-depth details see this blog post. If you haven't deployed a monitoring stack, you can at least use kubectl top pods.

Once you have found the breaking points of a single pod, you can add the limits to that specific pod like the example below (see the Kubernetes documentation for more examples and details):

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: net-core-app
    image: net-core-image
    resources:
      requests:
        memory: 64Mi
        cpu: 250m
      limits:
        memory: 128Mi
        cpu: 500m

The readiness probe is actually meant to tell when a Pod is ready in the first place. I guess you were thinking of the liveness probe, but that wouldn't be adequate either, because Kubernetes will kill the Pod when it exceeds its resource limit and reschedule it anyway.

Use the environment variable COMPlus_GCHeapHardLimit

Documentation: https://docs.microsoft.com/en-us/dotnet/api/system.gcmemoryinfo.totalavailablememorybytes?view=net-5.0

And notice: you should use hexadecimal values

It means the value 10000000 is hexadecimal: 0x10000000 = 268435456 bytes = 256 MB!
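
In a pod spec that could look like this (a sketch; the container and image names are placeholders, and the value is the hexadecimal string from above):

  containers:
  - name: net-core-app                # placeholder name
    image: net-core-image             # placeholder image
    env:
    - name: COMPlus_GCHeapHardLimit   # parsed as a hexadecimal number of bytes
      value: "10000000"               # 0x10000000 = 268435456 bytes = 256 MB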

I used docker run command arguments, which can be passed via the deployment yaml, to specify the memory size of the container:

args:
  - "--memory=124m --memory-swap=124m"

This way the .net GC 'sees' that only 124MB is available.

The args specifier is on the same level as ports and name under the containers specifier:

  containers:
    - name: xxx
      ports:
      ....
      args:
        - "--memory=124m --memory-swap=124m"

A description of the arguments '--memory' and '--memory-swap' can be found here: https://docs.docker.com/config/containers/resource_constraints/

-m or --memory= : The maximum amount of memory the container can use. If you set this option, the minimum allowed value is 6m (6 megabytes).

--memory-swap* The amount of memory this container is allowed to swap to disk. See --memory-swap details.

More details about passing arguments to the run command can be found here: How to pass docker run flags via kubernetes pod
