
How much is the overhead of an empty Elasticsearch index?

I have a cluster with a single node. The machine has 8 GB of RAM, and the ES process is assigned 6 GB of heap. There are 531 shards (522 indices) in total running on that node. Most of the shards contain almost no data.

Here are the stats:

Total documents: 265743

Deleted documents: 27069

Total size: 136923957 bytes (130.5 MB)

Fielddata: 250632 bytes

Filter cache: 9984 bytes

Segments: 82 total, 3479988 bytes in memory (~3.3 MB)

Heap committed is 5.9 GB and used is 5.6 GB.
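
For reference, figures like these can be read back from the node stats API. Here is a minimal Python sketch of my own (not from the original post), assuming an unauthenticated node at http://localhost:9200 and the requests library:

```python
import requests

BASE = "http://localhost:9200"  # assumption: local, unauthenticated node

# Fetch JVM heap usage plus index-level memory consumers for every node.
stats = requests.get(f"{BASE}/_nodes/stats/jvm,indices").json()

for node in stats["nodes"].values():
    jvm = node["jvm"]["mem"]
    idx = node["indices"]
    print(node["name"])
    print("  heap used/committed:", jvm["heap_used_in_bytes"], "/", jvm["heap_committed_in_bytes"])
    print("  fielddata bytes:    ", idx["fielddata"]["memory_size_in_bytes"])
    print("  segments bytes:     ", idx["segments"]["memory_in_bytes"])
```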

If I create a few more indices in the cluster, the node starts doing GC and eventually goes OOM. I know there are a lot of faults in this configuration (only one node, 6 GB of heap out of 8 GB of RAM).

I want to know how the memory is being used up. The total documents, filter cache, and fielddata add up to almost nothing, yet all the memory is consumed.
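
One way to dig into this is to list the Lucene segment memory held by every shard. A sketch under the same assumptions as above (I believe the segments.memory column of the _cat/shards API is available, but treat the column name as an assumption for your version):

```python
import requests

BASE = "http://localhost:9200"  # assumption: local, unauthenticated node

# _cat/shards can report per-shard segment memory; bytes=b returns raw byte counts.
resp = requests.get(
    f"{BASE}/_cat/shards",
    params={"format": "json", "bytes": "b", "h": "index,shard,prirep,segments.memory"},
)
shards = resp.json()

total = sum(int(s["segments.memory"] or 0) for s in shards)
print(f"{len(shards)} shards hold {total} bytes of segment memory in total")
```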

In my personal experience with ES 1.x and 2.x, the per-shard overhead is not trivial and is usually in the range of a few MB per shard. As I understand it, this is memory reserved for indexing buffers, state metadata, references to Lucene objects, caching objects, etc.

Basically, a bit of memory is reserved per shard so it can index quickly and start caching when needed. I don't know how much of this is still true in 5.x.
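
To put the question's numbers against that estimate, a quick back-of-envelope calculation (the per-shard figures below are assumptions, not measurements):

```python
# Assumed per-shard overheads in MB; the real number depends on version and load.
SHARDS = 531
for per_shard_mb in (2, 5, 10):
    total_gb = SHARDS * per_shard_mb / 1024
    print(f"{per_shard_mb:>2} MB/shard -> ~{total_gb:.1f} GB of baseline heap overhead")
```

Even at the low end of that range, 531 shards would claim a meaningful slice of a 6 GB heap before a single document is indexed.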
