
Pod exiting from Kubernetes causes spike in Pod memory usage

We have a Python image running a Sanic server with a simple entrypoint:

ENTRYPOINT ["python3.9", "entrypoint.py"]

All of our orchestration is managed by Kubernetes.

Whenever a pod is deleted, the pod exits with a spike in memory usage, triggering alerts on our Grafana dashboards.

How can I debug this?

It's probably because the Python process is doing something on exit that's eating RAM. As a workaround, you can change the Grafana panel to show average RAM usage across a time range to smooth out that spike.
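To see what the process is actually doing at shutdown, one approach (a sketch of my own, not something from the original setup) is to install a SIGTERM handler in `entrypoint.py` so the termination signal Kubernetes sends on pod deletion triggers a clean `sys.exit()`, and log peak memory from an `atexit` hook. The function names here are hypothetical:

```python
import atexit
import resource
import signal
import sys

def peak_rss_kb():
    # Peak resident set size of this process:
    # kilobytes on Linux, bytes on macOS.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

def log_peak_memory():
    # Runs at interpreter exit; compare this value with what
    # the pod normally uses to see how much shutdown adds.
    print(f"peak RSS at exit: {peak_rss_kb()}", file=sys.stderr)

def handle_sigterm(signum, frame):
    # Kubernetes sends SIGTERM when a pod is deleted. Exiting via
    # sys.exit() (instead of being killed) lets atexit hooks run,
    # so the shutdown cost becomes visible in the pod logs.
    sys.exit(0)

atexit.register(log_peak_memory)
signal.signal(signal.SIGTERM, handle_sigterm)
```

If the logged peak is much higher than steady-state usage, the spike is real work done during teardown (e.g. flushing buffers or cleanup callbacks) rather than a metrics artifact, and Python's `tracemalloc` module can then help pinpoint which allocations happen after the signal arrives.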
