
Why does Elasticsearch container memory usage keep increasing with little use?

I have deployed an Elasticsearch container on AWS using an EKS Kubernetes cluster. The container's memory usage keeps increasing even though there are only 3 indices and they are not used heavily. I am dumping cluster container logs into Elasticsearch using Fluentd; other than this, Elasticsearch is not used at all. I tried applying min/max heap sizes with -Xms512m -Xmx512m. They apply successfully, but memory usage still almost doubles within 24 hours. I am not sure what other options I have to configure. I tried changing the Docker image from elasticsearch:6.5.4 to elasticsearch:6.5.1, but the issue persists. I also tried the -XX:MaxHeapFreeRatio=50 Java option.
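For reference, a minimal sketch of how such heap flags are typically passed to the container through the ES_JAVA_OPTS environment variable of the official image (the container name and values below are illustrative, not the exact manifest):

    # Deployment fragment (illustrative): the official image appends
    # ES_JAVA_OPTS to the default JVM options, consistent with -Xms512m/-Xmx512m
    # appearing after the -Xms1g/-Xmx1g defaults in the start-up log below.
    containers:
    - name: es
      image: elasticsearch:6.5.1
      env:
      - name: ES_JAVA_OPTS
        value: "-Xms512m -Xmx512m"   # caps the heap only, not off-heap memory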

Check the screenshot from Kibana. [screenshot: Kibana memory-usage graph]

Edit: Following are the logs from Elasticsearch start-up:

[2019-03-18T13:24:03,119][WARN ][o.e.b.JNANatives         ] [es-79c977d57-v77gw] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
[2019-03-18T13:24:03,120][WARN ][o.e.b.JNANatives         ] [es-79c977d57-v77gw] This can result in part of the JVM being swapped out.
[2019-03-18T13:24:03,120][WARN ][o.e.b.JNANatives         ] [es-79c977d57-v77gw] Increase RLIMIT_MEMLOCK, soft limit: 16777216, hard limit: 16777216
[2019-03-18T13:24:03,120][WARN ][o.e.b.JNANatives         ] [es-79c977d57-v77gw] These can be adjusted by modifying /etc/security/limits.conf, for example: 
    # allow user 'elasticsearch' mlockall
    elasticsearch soft memlock unlimited
    elasticsearch hard memlock unlimited
[2019-03-18T13:24:03,120][WARN ][o.e.b.JNANatives         ] [es-79c977d57-v77gw] If you are logged in interactively, you will have to re-login for the new limits to take effect.
[2019-03-18T13:24:03,397][INFO ][o.e.e.NodeEnvironment    ] [es-79c977d57-v77gw] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/xvda1)]], net usable_space [38.6gb], net total_space [96.8gb], types [ext4]
[2019-03-18T13:24:03,397][INFO ][o.e.e.NodeEnvironment    ] [es-79c977d57-v77gw] heap size [503.6mb], compressed ordinary object pointers [true]
[2019-03-18T13:24:03,469][INFO ][o.e.n.Node               ] [es-79c977d57-v77gw] node name [es-79c977d57-v77gw], node ID [qrCUCaHoQfa3SXuTpLjUUA]
[2019-03-18T13:24:03,469][INFO ][o.e.n.Node               ] [es-79c977d57-v77gw] version[6.5.1], pid[1], build[default/tar/8c58350/2018-11-16T02:22:42.182257Z], OS[Linux/4.15.0-1032-aws/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.1/11.0.1+13]
[2019-03-18T13:24:03,469][INFO ][o.e.n.Node               ] [es-79c977d57-v77gw] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.oEmM9oSp, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:UseAVX=2, -Des.cgroups.hierarchy.override=/, -Xms512m, -Xmx512m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=tar]
[2019-03-18T13:24:05,082][INFO ][o.e.p.PluginsService     ] [es-79c977d57-v77gw] loaded module [aggs-matrix-stats]
[2019-03-18T13:24:05,082][INFO ][o.e.p.PluginsService     ] [es-79c977d57-v77gw] loaded module [analysis-common]
[2019-03-18T13:24:05,082][INFO ][o.e.p.PluginsService     ] [es-79c977d57-v77gw] loaded module [ingest-common] ....

Pod memory usage in Kubernetes isn't equivalent to JVM memory usage; to get that stat you'd have to pull the metric from the JVM directly. Pod memory usage, depending on the metric you're querying, can also include page cache and swap space in addition to application memory, so there's no telling from the graph you've provided what is actually consuming memory here. Depending on what the problem is, Elasticsearch has advanced features like memory locking, which will lock your process address space in RAM.

However, a surefire way to keep a Kubernetes pod from eating up non-JVM memory is simply to set a limit on how much memory the pod can consume. In your Kubernetes pod spec, set resources.limits.memory to your desired memory cap and memory consumption won't stray beyond that limit. Of course, if this is a problem with your JVM configuration, the ES pod will fail with an OOM error when it hits the limit. Just make sure you're allocating additional space for system resources; that is, your pod memory limit should be somewhat greater than your max JVM heap size.
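A hedged sketch of that last point, assuming the 512m heap from the question (the container name and the 1Gi figure are illustrative, not production sizing advice):

    # Pod spec fragment: cap total pod memory somewhat above the JVM heap so
    # off-heap usage (page cache, thread stacks, direct buffers) has headroom.
    containers:
    - name: es
      image: elasticsearch:6.5.1
      resources:
        requests:
          memory: "1Gi"
        limits:
          memory: "1Gi"   # comfortably above the -Xmx512m heap

With a limit in place, page cache accounted to the pod should be reclaimed under memory pressure instead of growing unbounded, and a genuine JVM problem will surface as an OOMKilled pod rather than a slowly climbing graph.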

On another note, you might be surprised how much logging Kubernetes is actually doing behind the scenes. Consider periodically closing Elasticsearch indices that aren't being regularly searched to free up memory.
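If the Fluentd indices are date-stamped, one way to do this on a schedule is Elasticsearch Curator (a separate tool, not mentioned in the question); the index prefix and retention period below are assumptions. A sketch of a Curator action file:

    # curator-close.yml (illustrative): close log indices older than 7 days
    actions:
      1:
        action: close
        description: "Close old log indices to free up memory"
        options:
          ignore_empty_list: True
          delete_aliases: False
        filters:
        - filtertype: pattern
          kind: prefix
          value: logstash-        # assumed Fluentd index prefix
        - filtertype: age
          source: name
          direction: older
          timestring: '%Y.%m.%d'
          unit: days
          unit_count: 7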
