
Elasticsearch increasing heap size

We are running Elasticsearch inside a Docker container on Amazon's ECS. We noticed that the heap slowly increases over time. We first noticed it when it rose above 70% and requests started being rejected (indices.breaker.total.limit).
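For reference, a quick way to see whether the parent circuit breaker is actually tripping is the breaker section of the node stats API (a diagnostic sketch only):

> curl localhost:9200/_nodes/stats/breaker?pretty   # diagnostic only

The per-breaker "tripped" counters in that output show how often requests have been rejected.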

The thing is that I have never seen the heap decrease, which feels fishy!

So far we have increased the instance size and are now running an instance with 30G of memory. The heap is set to approximately half the memory, ES_HEAP_SIZE=14g (Xmx=Xms=14g).
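For completeness, the heap size is passed into the container as an environment variable; a minimal sketch of the run command (the image name and tag are illustrative, and it assumes the startup script honors ES_HEAP_SIZE, as the stock 1.x/2.x scripts do):

> docker run -d -p 9200:9200 -e ES_HEAP_SIZE=14g elasticsearch:1.5.1   # illustrative image/tag

With Xms equal to Xmx the committed heap stays fixed, so growth shows up only in the used heap, not in the heap size itself.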

Has anyone else had a similar experience? Is it a bug in Elasticsearch, or just an incorrect configuration?

Elasticsearch version: 1.5.1

> curl localhost:9200/_cat/fielddata?v

id                     host         ip         node            total position deal_status heading.sortorder org_relationlastmodified deal_value deal_probability 1_ZipCodeVisit_0   tag employer_tag 1_CityVisit_0 dateofregistration temperature uniqueId _type quick_ratio org_relation employer_relationlastmodified turnover turnover_per_employee deal_source employer_relation deal_statusdate 1_custom_1466 average_salary_per_employee deal_orderdate 0_NumberOfEmployeesRange_20 1_LegalForm_0 1_custom_1816 0_WorksiteType_100 0_LineOfBusiness_2 equity_ratio profitmargin 0_LineOfBusiness_1 0_CountyVisit_40 0_NumberOfEmployeesRange_22 0_MunicipalityVisit_61 0_LegalForm_110 dividends 1_custom_1744 0_MunicipalityVisit_60 responsiblecoworker result_before_tax
XMTlkdnsToKvMHqgApMBAg 5dc819096905 172.17.0.2 Hitman        729.8mb    8.1mb       1.1mb           261.5mb                    1.7mb    305.3kb          849.1kb           20.9mb 6.4mb        1.3mb        19.3mb             12.3mb          0b  283.7mb 9.6mb       5.1mb      810.5kb                       632.2kb   11.6mb                 4.1mb     150.8kb           566.4kb         568.6kb        34.1kb                       4.2mb        973.5kb                       5.7mb         4.6mb        37.4kb              4.9mb              8.1mb        4.7mb        4.2mb              9.2mb            3.3mb                       4.2mb                802.9kb           3.9mb     4.3mb        37.7kb                  7.5mb               2.4mb               5mb
dHAoWkHMQKSnwAB0KrJRJw 8ffc068518f9 172.17.0.2 Moira Brandon 718.9mb    8.2mb       1.1mb           261.5mb                    1.3mb      124kb          793.3kb           19.6mb 6.4mb          1mb        19.1mb             10.2mb          0b  283.8mb 9.6mb       5.2mb      714.7kb                       791.3kb    8.8mb                 3.4mb          0b           422.6kb          83.9kb        16.8kb                       4.6mb        989.4kb                       5.6mb         4.5mb            0b              5.2mb              7.9mb        4.1mb        4.3mb                9mb            3.2mb                       4.3mb                     0b           3.8mb     4.3mb            0b                  7.1mb               2.5mb             4.4mb
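The fielddata totals above (roughly 730mb per node) are small compared to a 14g heap, but if fielddata were the suspect it could be dropped on demand with the clear-cache API (shown only as a diagnostic sketch):

> curl -XPOST 'localhost:9200/_cache/clear?fielddata=true'   # diagnostic only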

[Update 2016-10-24] We have upgraded to version 2.4.0 but we still experience the same problem. If I force a GC, the heap drops to about 4%, which is the same value as on a fresh instance.

Example for an instance at 73% heap; the JVM memory stats show that the old generation holds about 10G, not sure if that's normal:

jvm mem heap percent 73%
"young":    "used_in_bytes" : 199026920
"survivor": "used_in_bytes" : 2422528
"old":      "used_in_bytes" : 10754631392

What triggers a GC? Should we let the heap increase above 70%?

This may be related to a kind-of known behavior in pre-2.x versions that affects mainly Kibana, but I guess Elasticsearch as well.

See this GitHub issue: https://github.com/elastic/kibana/issues/5170

It may be the same in your case, which basically boils down to this Node.js issue: https://github.com/nodejs/node/issues/2683

It may also be a configuration in ES that is not ideal. Look for this usual suspect in the Elasticsearch config:

bootstrap.mlockall: true
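Whether mlockall actually took effect can be checked from the process section of the nodes info API (a quick diagnostic, assuming the standard endpoint):

> curl localhost:9200/_nodes/process?pretty   # look for "mlockall" : true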

Do you have a lot of shards / replicas? Do you use Kibana as well?
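As a starting point for the shard question, the cat APIs give a quick overview (illustrative commands only):

> curl localhost:9200/_cat/indices?v
> curl localhost:9200/_cat/shards?v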
