Elasticsearch cluster design for ~200G logs a day
I've created an ES cluster (version 5.4.1) with 4 data nodes, 3 master nodes, and one client node (Kibana).
The data nodes are r4.2xlarge AWS instances (61 GB memory, 8 vCPUs) with 30 GB allocated to the Elasticsearch JVM heap.
We're writing around 200 GB of logs every day and keeping them for the last 14 days.
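For context, a quick back-of-envelope sizing from those numbers. The replica count, daily-index layout, and the ~50 GB-per-shard target below are illustrative assumptions, not settings from the post:

```python
# Back-of-envelope storage and shard sizing for the cluster described above.
# Assumptions (not from the original post): one replica per shard, one daily
# index, and the common guidance of keeping shards at or below roughly 50 GB.

daily_gb = 200          # ingest per day
retention_days = 14
data_nodes = 4
replicas = 1

primary_gb = daily_gb * retention_days        # primary data retained
total_gb = primary_gb * (1 + replicas)        # bytes actually on disk
gb_per_node = total_gb / data_nodes           # average load per data node

# With 4 primary shards per daily index, each shard holds ~50 GB, which
# keeps shards a manageable size for recovery and segment merging.
primaries_per_index = 4
shard_gb = daily_gb / primaries_per_index
total_shards = retention_days * primaries_per_index * (1 + replicas)

print(f"{total_gb} GB total, {gb_per_node:.0f} GB/node, "
      f"{shard_gb:.0f} GB/shard, {total_shards} shards")
```

Under these assumptions each data node carries about 1.4 TB, so disk capacity and IO matter as much as heap size.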
I'm looking for recommendations to improve the cluster's performance, especially search performance (Kibana).
More data nodes? More client nodes? Bigger nodes? More replicas? Anything that can improve performance is an option.
Is there anyone running something close to this design or load? I'd be happy to hear about other designs and loads.
Thanks, Moshe
Wild guess: you are limited on IO. Prefer local disks over EBS, prefer SSDs over spinning disks, and, if you can, get as many IOPS as you can afford for this use case.
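The IO guess can be sanity-checked with a rough average. The replica count and the merge-amplification factor below are illustrative assumptions; real write amplification from Lucene segment merges varies with merge policy and document size:

```python
# Rough estimate of the sustained write throughput the data nodes must absorb.
# Assumptions (illustrative, not measured): one replica doubles the bytes
# written, and segment merging roughly doubles them again.

daily_gb = 200
data_nodes = 4
replicas = 1
merge_amplification = 2   # crude factor for Lucene segment merges

seconds_per_day = 24 * 60 * 60
raw_mb_s = daily_gb * 1024 / seconds_per_day           # raw log ingest rate
written_mb_s = raw_mb_s * (1 + replicas) * merge_amplification
per_node_mb_s = written_mb_s / data_nodes

print(f"~{raw_mb_s:.1f} MB/s raw, ~{written_mb_s:.1f} MB/s written, "
      f"~{per_node_mb_s:.1f} MB/s per node")
```

The average looks modest, but it hides bursty ingest and says nothing about the random reads that Kibana searches add on top, which is why provisioned IOPS (or local SSDs) tend to matter more than raw throughput here.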