
Elasticsearch - Keep hitting "429 Too Many Requests" error

We have been running an AWS Elasticsearch Service domain with 2 m4.large.elasticsearch data nodes for more than a year without any severe issues. Because of increased demand we set up 2 additional r6g.large instances (which have the same amount of vCPU and memory as the m4.large, but should offer even better performance according to the docs).

Ever since switching to these we have been getting "429 Too Many Requests" errors in our application. After some digging in https://aws.amazon.com/es/premiumsupport/knowledge-center/resolve-429-error-es/ the following things have been tried, without success:

  • Increasing the circuit breaker limit to 90% => Does not solve the issue
  • Switching to c6g.xlarge (compute-optimized instances with double the capacity) => Does not solve the issue
  • Enabling slow search logs and error logs in the hope of getting more info => Nothing is being logged
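One more thing worth checking before tuning limits is whether the 429s really come from queue rejections: Elasticsearch's `_cat/thread_pool` API reports per-node queue depth and rejection counts for the write pool. Below is a minimal sketch of reading that output; the endpoint path and columns follow the standard cat API, but the parsing helper and the sample values are purely illustrative:

```python
# Fetching would typically be a GET against
#   https://<your-domain-endpoint>/_cat/thread_pool/write?v&h=node_name,name,queue,rejected
# (endpoint is a placeholder); here we only parse the whitespace-separated text it returns.

def parse_cat_thread_pool(text: str) -> list[dict]:
    """Parse _cat/thread_pool verbose output into a list of row dicts."""
    lines = text.strip().splitlines()
    header = lines[0].split()
    return [dict(zip(header, line.split())) for line in lines[1:]]

# Illustrative sample of the response shape (values invented for the example):
sample = """node_name name  queue rejected
node-1    write 180   5321
node-2    write 0     0
"""

for row in parse_cat_thread_pool(sample):
    # A steadily growing 'rejected' count on a node points at write-queue pressure.
    print(row["node_name"], row["queue"], row["rejected"])
```

If `rejected` keeps climbing on the new nodes while staying flat on the old ones, the problem is queue saturation rather than the circuit breaker.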

If anyone has an idea on how we could go about solving this, that would be much appreciated!

PS: The "old" domain is running Elasticsearch 7.7 while the new one is running 7.10, but I would be astonished if that were the cause.

A 429 error returned as a write rejection indicates that the bulk queue is full. The bulk queue on each node can hold between 50 and 200 requests, depending on which Elasticsearch version you are running. There have been multiple reports of this, and an older Elasticsearch version is the probable root cause.
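Since a full bulk queue is a transient condition, the standard client-side mitigation is to shrink bulk request sizes and retry rejected requests with exponential backoff. A minimal sketch of the retry loop (the `send_bulk` callable stands in for whatever HTTP client the application uses and is a hypothetical name, as are the parameters):

```python
import time
import random

def send_with_backoff(send_bulk, payload, max_retries=5, base_delay=0.5):
    """Retry a bulk request on HTTP 429 with exponential backoff and jitter.

    send_bulk(payload) is assumed to return the HTTP status code.
    """
    for attempt in range(max_retries):
        status = send_bulk(payload)
        if status != 429:
            return status
        # Wait base_delay * 2^attempt plus a little jitter before retrying.
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
    raise RuntimeError("bulk request still rejected after retries")

# Illustrative fake client: rejects twice, then accepts.
responses = iter([429, 429, 200])
status = send_with_backoff(lambda p: next(responses), {"docs": []}, base_delay=0.01)
print(status)
```

The jitter prevents many writers from retrying in lockstep and hammering the queue again at the same instant.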
