I have a Swarm cluster with 24 GB of RAM on each node.
free -g shows only 6 GB used, but I am getting OutOfMemory errors in some Java and Elasticsearch containers.
              total        used        free      shared  buff/cache   available
Mem:             23           6           6           0          10          16
Swap:             1           0           1
I removed all memory reservations and limits from the containers.
Any idea what causes the OutOfMemory? I did set -Xmx on the containers, and they are not using too much RAM...
Thanks a lot
I found the problem.
It was a kernel setting in /etc/sysctl.conf.
I had this:
cat /etc/sysctl.conf |grep vm.
vm.swappiness=10
vm.overcommit_memory=2
vm.dirty_ratio=2
vm.dirty_background_ratio=1
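With vm.overcommit_memory=2 the kernel stops overcommitting: allocations fail once committed memory reaches CommitLimit (roughly swap plus overcommit_ratio percent of RAM, 50% by default), and the JVM surfaces that as OutOfMemory even while free shows plenty of memory available. A quick way to check this on a Linux host:

```shell
# 0 = heuristic overcommit (kernel default), 1 = always allow, 2 = strict accounting
cat /proc/sys/vm/overcommit_memory

# Under mode 2, CommitLimit is the hard ceiling; Committed_AS is what is
# currently committed. OOM failures start when Committed_AS hits CommitLimit.
grep -E 'CommitLimit|Committed_AS' /proc/meminfo
```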
I removed everything that had been set for DB2 (restored the kernel defaults), and now I can take advantage of all the RAM on the hosts. The culprit was vm.overcommit_memory=2: it enables strict commit accounting, so the kernel refuses new allocations once the commit limit is reached, even when plenty of physical memory is still free.
I kept this :
cat /etc/sysctl.conf |grep vm.
vm.swappiness=10
vm.max_map_count=262144
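To confirm the kept values are actually active, the live settings can be read back from /proc/sys rather than from the config file (a minimal check, assuming a Linux host):

```shell
# Read the live kernel values (what the running kernel uses,
# not just what /etc/sysctl.conf says):
cat /proc/sys/vm/swappiness
cat /proc/sys/vm/max_map_count   # Elasticsearch requires at least 262144
```

After editing /etc/sysctl.conf, `sysctl -p` reloads it without a reboot.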
Thanks