
K8S Pod OOM killed with apparent memory leak, where did the memory go?

I have an issue with a K8S pod getting OOM killed, but with some weird conditions and observations.

The pod is a golang 1.15.6 based REST service, running on x86-64 architecture. When the pod runs on VM based clusters, everything is fine and the service behaves normally. When the service runs on nodes provisioned directly on hardware, it appears to experience a memory leak and ends up getting OOMed.

Observations are that, when running on the problematic configuration, "kubectl top pod" reports continually increasing memory utilization until the defined limit (64MiB) is reached, at which point the OOM killer is invoked.

Observations from inside the pod using "top" suggest that the memory usage of the various processes inside the pod is stable, at around 40MiB RSS. The VIRT, RES and SHR values reported by top remain stable over time, with only minor fluctuations.
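For completeness, the cgroup's own accounting can be dumped from inside the pod with something like the sketch below (assuming a cgroup v1 memory controller mounted at /sys/fs/cgroup/memory, which is what the kubepods hierarchy in the OOM report suggests; the paths may differ on other setups). That usage figure, not per-process RSS, is what the 64MiB limit is enforced against, and it includes kernel/slab memory that top never shows.

// Minimal sketch: print what the container's memory cgroup is charged for,
// assuming cgroup v1 paths under /sys/fs/cgroup/memory.
package main

import (
    "fmt"
    "io/ioutil"
)

func main() {
    files := []string{
        "/sys/fs/cgroup/memory/memory.usage_in_bytes",     // total charge (user + kernel), compared against the limit
        "/sys/fs/cgroup/memory/memory.kmem.usage_in_bytes", // kernel memory charge (slab, stacks, ...)
        "/sys/fs/cgroup/memory/memory.stat",                // per-category breakdown (cache, rss, ...)
    }
    for _, f := range files {
        data, err := ioutil.ReadFile(f)
        if err != nil {
            fmt.Printf("%s: %v\n", f, err)
            continue
        }
        fmt.Printf("--- %s ---\n%s\n", f, data)
    }
}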

I've analyzed the golang code extensively, including obtaining memory profiles over time (pprof). No sign of a leak in the actual golang code, which tallies with correct operation in the VM based environment and with the observations from top.
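For reference, a minimal sketch of one way to collect that kind of data (not the service's actual code): expose the standard net/http/pprof endpoints and periodically log runtime.MemStats, so the Go runtime's own view of its memory can be compared with the pod-level numbers from kubectl top.

// Sketch only: pprof endpoint plus periodic MemStats logging.
package main

import (
    "log"
    "net/http"
    _ "net/http/pprof" // registers /debug/pprof/* on http.DefaultServeMux
    "runtime"
    "time"
)

func main() {
    go func() {
        // Heap profiles can then be pulled with:
        //   go tool pprof http://localhost:6060/debug/pprof/heap
        // (from inside the pod, or via kubectl port-forward)
        log.Println(http.ListenAndServe("localhost:6060", nil))
    }()

    for {
        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        // HeapAlloc/HeapSys cover the Go heap; Sys is everything the runtime
        // has obtained from the OS. None of these include kernel/slab memory
        // charged to the pod's cgroup.
        log.Printf("HeapAlloc=%dKiB HeapSys=%dKiB Sys=%dKiB NumGC=%d",
            m.HeapAlloc/1024, m.HeapSys/1024, m.Sys/1024, m.NumGC)
        time.Sleep(30 * time.Second)
    }
}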

The OOM message below also suggests that the total RSS used by the pod was only 38.75MiB (sum of the RSS column = 9919 pages * 4k = 38.75MiB).

kernel: [651076.945552] xxxxxxxxxxxx invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=999
kernel: [651076.945556] CPU: 35 PID: 158127 Comm: xxxxxxxxxxxx Not tainted 5.4.0-73-generic #82~18.04.1
kernel: [651076.945558] Call Trace:
kernel: [651076.945567]  dump_stack+0x6d/0x8b
kernel: [651076.945573]  dump_header+0x4f/0x200
kernel: [651076.945575]  oom_kill_process+0xe6/0x120
kernel: [651076.945577]  out_of_memory+0x109/0x510
kernel: [651076.945582]  mem_cgroup_out_of_memory+0xbb/0xd0
kernel: [651076.945584]  try_charge+0x79a/0x7d0
kernel: [651076.945585]  mem_cgroup_try_charge+0x75/0x190
kernel: [651076.945587]  __add_to_page_cache_locked+0x1e1/0x340
kernel: [651076.945592]  ? scan_shadow_nodes+0x30/0x30
kernel: [651076.945594]  add_to_page_cache_lru+0x4f/0xd0
kernel: [651076.945595]  pagecache_get_page+0xea/0x2c0
kernel: [651076.945596]  filemap_fault+0x685/0xb80
kernel: [651076.945600]  ? __switch_to_asm+0x40/0x70
kernel: [651076.945601]  ? __switch_to_asm+0x34/0x70
kernel: [651076.945602]  ? __switch_to_asm+0x40/0x70
kernel: [651076.945603]  ? __switch_to_asm+0x34/0x70
kernel: [651076.945604]  ? __switch_to_asm+0x40/0x70
kernel: [651076.945605]  ? __switch_to_asm+0x34/0x70
kernel: [651076.945606]  ? __switch_to_asm+0x40/0x70
kernel: [651076.945608]  ? filemap_map_pages+0x181/0x3b0
kernel: [651076.945611]  ext4_filemap_fault+0x31/0x50
kernel: [651076.945614]  __do_fault+0x57/0x110
kernel: [651076.945615]  __handle_mm_fault+0xdde/0x1270
kernel: [651076.945617]  handle_mm_fault+0xcb/0x210
kernel: [651076.945621]  __do_page_fault+0x2a1/0x4d0
kernel: [651076.945625]  ? __audit_syscall_exit+0x1e8/0x2a0
kernel: [651076.945627]  do_page_fault+0x2c/0xe0 
kernel: [651076.945628]  page_fault+0x34/0x40
kernel: [651076.945630] RIP: 0033:0x5606e773349b 
kernel: [651076.945634] Code: Bad RIP value.
kernel: [651076.945635] RSP: 002b:00007fbdf9088df0 EFLAGS: 00010206
kernel: [651076.945637] RAX: 0000000000000000 RBX: 0000000000004e20 RCX: 00005606e775ce7d
kernel: [651076.945637] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00007fbdf9088dd0
kernel: [651076.945638] RBP: 00007fbdf9088e48 R08: 0000000000006c50 R09: 00007fbdf9088dc0
kernel: [651076.945638] R10: 0000000000000000 R11: 0000000000000202 R12: 00007fbdf9088dd0
kernel: [651076.945639] R13: 0000000000000000 R14: 00005606e7c6140c R15: 0000000000000000
kernel: [651076.945640] memory: usage 65536kB, limit 65536kB, failcnt 26279526
kernel: [651076.945641] memory+swap: usage 65536kB, limit 9007199254740988kB, failcnt 0
kernel: [651076.945642] kmem: usage 37468kB, limit 9007199254740988kB, failcnt 0
kernel: [651076.945642] Memory cgroup stats for /kubepods/burstable/pod34ffde14-8e80-4b3a-99ac-910137a04dfe:
kernel: [651076.945652] anon 25112576
kernel: [651076.945652] file 0
kernel: [651076.945652] kernel_stack 221184
kernel: [651076.945652] slab 41406464
kernel: [651076.945652] sock 0
kernel: [651076.945652] shmem 0
kernel: [651076.945652] file_mapped 2838528
kernel: [651076.945652] file_dirty 0
kernel: [651076.945652] file_writeback 0 
kernel: [651076.945652] anon_thp 0
kernel: [651076.945652] inactive_anon 0
kernel: [651076.945652] active_anon 25411584
kernel: [651076.945652] inactive_file 0
kernel: [651076.945652] active_file 536576
kernel: [651076.945652] unevictable 0
kernel: [651076.945652] slab_reclaimable 16769024
kernel: [651076.945652] slab_unreclaimable 24637440
kernel: [651076.945652] pgfault 7211542
kernel: [651076.945652] pgmajfault 2895749
kernel: [651076.945652] workingset_refault 71200645
kernel: [651076.945652] workingset_activate 5871824
kernel: [651076.945652] workingset_nodereclaim 330
kernel: [651076.945652] pgrefill 39987763
kernel: [651076.945652] pgscan 144468270 
kernel: [651076.945652] pgsteal 71255273 
kernel: [651076.945652] pgactivate 27649178
kernel: [651076.945652] pgdeactivate 33525031
kernel: [651076.945653] Tasks state (memory values in pages):
kernel: [651076.945653] [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name   
kernel: [651076.945656] [ 151091]     0 151091      255        1    36864        0          -998 pause  
kernel: [651076.945675] [ 157986]     0 157986       58        4    32768        0           999 dumb-init  
kernel: [651076.945676] [ 158060]     0 158060    13792      869   151552        0           999 su  
kernel: [651076.945678] [ 158061]  1234 158061    18476     6452   192512        0           999 yyyyyy
kernel: [651076.945679] [ 158124]  1234 158124     1161      224    53248        0           999 sh  
kernel: [651076.945681] [ 158125]  1234 158125   348755     2369   233472        0           999 xxxxxxxxxxxx
kernel: [651076.945682] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=a0027a4fe415aa7a6ad54aa3fbf553b9af27c61043d08101931e985efeee0ed7,mems_allowed=0-3,oom_memcg=/kubepods/burstable/pod34ffde14-8e80-4b3a-99ac-910137a04dfe,task_memcg=/kubepods/burstable/pod34ffde14-8e80-4b3a-99ac-910137a04dfe/a0027a4fe415aa7a6ad54aa3fbf553b9af27c61043d08101931e985efeee0ed7,task=yyyyyy,pid=158061,uid=1234
kernel: [651076.945695] Memory cgroup out of memory: Killed process 158061 (yyyyyy) total-vm:73904kB, anon-rss:17008kB, file-rss:8800kB, shmem-rss:0kB, UID:1234 pgtables:188kB oom_score_adj:999
kernel: [651076.947429] oom_reaper: reaped process 158061 (yyyyyy), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB

The OOM message clearly states that usage = 65536kB, limit = 65536kB, but I don't immediately see where the approximately 25MiB of memory not accounted for under RSS has gone.

I see slab_unreclaimable = 24637440 (24MiB), which is approximately the amount of memory that appears to be unaccounted for; not sure if there is anything significant in this, though.
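As a rough sanity check, the cgroup stats in the OOM report do add up to the limit once kernel memory is included (all values copied from the log above):

// Back-of-the-envelope reconciliation of the OOM report numbers: the cgroup
// is charged for anon + page cache + kernel memory (slab, stacks), not just
// the per-process RSS that top shows.
package main

import "fmt"

func main() {
    const (
        anon        = 25112576     // "anon" from the cgroup stats, bytes
        slab        = 41406464     // "slab" = slab_reclaimable + slab_unreclaimable
        kernelStack = 221184       // "kernel_stack"
        limit       = 65536 * 1024 // 64MiB cgroup limit
    )
    total := anon + slab + kernelStack
    fmt.Printf("anon+slab+kernel_stack = %.2f MiB (cgroup limit %.2f MiB)\n",
        float64(total)/(1<<20), float64(limit)/(1<<20))
    // Output: anon+slab+kernel_stack = 63.65 MiB (cgroup limit 64.00 MiB)
    // i.e. the ~25MiB that is invisible to per-process RSS is kernel slab
    // (much of it slab_unreclaimable), which is still charged to the pod's cgroup.
}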

Looking for any suggestions as to where the memory is being used. Any input would be most welcome.

I see slab_unreclaimable = 24637440, (24MiB), which is approximately the amount of memory that appears to be unaccounted for...

For slab details you can try the slabinfo command or do cat /proc/slabinfo. The table could point you to where the memory has gone.
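To illustrate that suggestion, a small sketch that parses /proc/slabinfo (usually readable only by root) and lists the caches using the most memory might look like the following. The column layout assumed here is the standard "slabinfo - version: 2.1" format, and on cgroup v1 kernels a per-pod view may also be available via the cgroup's memory.kmem.slabinfo file.

// Sketch: list the largest slab caches from /proc/slabinfo, with memory
// per cache estimated as num_objs * objsize.
package main

import (
    "bufio"
    "fmt"
    "os"
    "sort"
    "strconv"
    "strings"
)

type cache struct {
    name  string
    bytes int64
}

func main() {
    f, err := os.Open("/proc/slabinfo")
    if err != nil {
        panic(err)
    }
    defer f.Close()

    var caches []cache
    sc := bufio.NewScanner(f)
    for sc.Scan() {
        line := sc.Text()
        // Skip the version line and the "# name ..." header.
        if strings.HasPrefix(line, "slabinfo") || strings.HasPrefix(line, "#") {
            continue
        }
        // Columns: name active_objs num_objs objsize objperslab pagesperslab ...
        fields := strings.Fields(line)
        if len(fields) < 4 {
            continue
        }
        numObjs, err1 := strconv.ParseInt(fields[2], 10, 64)
        objSize, err2 := strconv.ParseInt(fields[3], 10, 64)
        if err1 != nil || err2 != nil {
            continue
        }
        caches = append(caches, cache{fields[0], numObjs * objSize})
    }
    if err := sc.Err(); err != nil {
        panic(err)
    }

    // Largest caches first; print the top ten.
    sort.Slice(caches, func(i, j int) bool { return caches[i].bytes > caches[j].bytes })
    for i, c := range caches {
        if i == 10 {
            break
        }
        fmt.Printf("%-24s %8.2f MiB\n", c.name, float64(c.bytes)/(1<<20))
    }
}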
