
Elasticsearch: high CPU usage

I'm running ES on an AWS EC2 t2.small instance, and every once in a while I experience a sudden, massive drop in CPU credits.

https://www.dropbox.com/s/0pw0qfudoca899f/cpu_credits.png?dl=0

The drop started on a Monday, which is when we create 4 new logging indices for that week's logs. We currently have ~60 logging indices, which mostly just receive insert requests and are rarely searched. We also have about 30 indices that are actively searched against, and at least 10 of them get regular bulk updates.

Last time I faced an issue like this, I deleted a bunch of old indices and that seemed to help; however, I would prefer to avoid that.

What are the most common reasons for high resource usage? The number of indices? The number of records in them? The number of shards allocated? The volume of updates to records or mappings (there are some indices with thousands of fields)?

Let me know if there's any information I could provide, and thank you in advance for any help in clearing up this issue.


EDIT 1:

Output from _cat/indices?v

Output from _nodes/stats
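For anyone reproducing these diagnostics, the two outputs above come from the cat-indices and node-stats APIs. A typical way to pull them (assuming a default local cluster on port 9200) is:

```shell
# List every index with its health, shard count, doc count and size on disk
curl -s 'localhost:9200/_cat/indices?v'

# Full per-node statistics (heap, GC, thread pools, etc.) as JSON
curl -s 'localhost:9200/_nodes/stats?pretty'
```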

So with a t2.small I suppose you have 1GB of RAM allocated to the ES heap, right? One thing I notice is that, given the very small size of your indices (&lt;100MB), you have way too many shards; a single shard would be more than enough. Since each shard consumes resources, you'd be way better off with fewer of them.
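To see how many shards those small indices are spread across (and hence where the per-shard overhead comes from), the cat-shards and cat-nodes APIs give a quick breakdown; `localhost:9200` is an assumed cluster address:

```shell
# One line per shard: index, shard number, primary/replica, docs, size, node
curl -s 'localhost:9200/_cat/shards?v' | sort

# Current heap usage per node, to confirm pressure on the 1GB heap
curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.max'
```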

One thing you can do is consolidate all your indices, i.e. put all the go_request_data-2016 weekly indices into a yearly one with a single shard, etc. You'd end up with way fewer indices and shards without having to delete any data.
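A minimal sketch of that consolidation using the Reindex API; the weekly index pattern `go_request_data-2016-w*` and the yearly index name are assumptions about your naming scheme, so adjust them to match your actual indices:

```shell
# 1. Create the yearly index with a single primary shard
curl -s -XPUT 'localhost:9200/go_request_data-2016' \
  -H 'Content-Type: application/json' \
  -d '{"settings": {"number_of_shards": 1, "number_of_replicas": 1}}'

# 2. Copy all weekly data into it
curl -s -XPOST 'localhost:9200/_reindex' \
  -H 'Content-Type: application/json' \
  -d '{
        "source": {"index": "go_request_data-2016-w*"},
        "dest":   {"index": "go_request_data-2016"}
      }'

# 3. Only after verifying the doc counts match, delete the weekly indices
# curl -s -XDELETE 'localhost:9200/go_request_data-2016-w*'
```

Note that the source pattern must not match the destination index, otherwise the reindex would read its own output.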
