
What would be the best self-preservation time parameter configuration for a Eureka server with 12 microservices and 2 instances of each microservice?

I have 12 microservices, including 1 Eureka server and 1 API Gateway, deployed on AWS (Kubernetes + Docker images). I am facing an issue where the microservices frequently register and deregister: the microservice count shown on the Eureka server dashboard is not a constant 12; sometimes it shows 10, sometimes 7, and then 12 again on every page refresh.

Because of this behavior I am getting a Forwarding Error at the API gateway, caused by the load balancer not having an available server for the client.

The Eureka server dashboard also shows the error below:

EMERGENCY! EUREKA MAY BE INCORRECTLY CLAIMING INSTANCES ARE UP WHEN THEY'RE NOT. RENEWALS ARE LESSER THAN THRESHOLD AND HENCE THE INSTANCES ARE NOT BEING EXPIRED JUST TO BE SAFE.

I have the configuration below for self-preservation, and it looks like something is wrong with it. Could you please help me fix this issue?

eureka.server.eviction-interval-timer-in-ms=15000
eureka.instance.leaseRenewalIntervalInSeconds=30
eureka.instance.leaseExpirationDurationInSeconds=90
eureka.server.renewal-percent-threshold=0.85
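
To make the numbers concrete: with 12 services at 2 instances each there are roughly 24 registered clients (the exact count depends on whether the gateway and Eureka itself are registered), so with these settings the self-preservation threshold works out approximately as:

expected heartbeats per client  = 60 / 30 = 2 per minute
expected heartbeats per minute  ≈ 24 * 2 = 48
self-preservation threshold     ≈ 48 * 0.85 ≈ 40

If the server receives fewer than about 40 heartbeats in a minute, it enters self-preservation mode and stops expiring instances, which is exactly the EMERGENCY warning shown above.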

I am also using some Hystrix, Ribbon, and Zuul timeout properties, as below:

hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds=30000000
ribbon.ReadTimeout=3000000
ribbon.ConnectTimeout=1000000
zuul.host.socket-timeout-millis=1000000

In a Kubernetes environment, instances are not removed from the registry while self-preservation is enabled, so some requests are routed to instances that no longer exist.

You can try disabling it: eureka.server.enableSelfPreservation=false
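
A minimal sketch of the properties this approach usually implies, assuming the goal is to let Kubernetes handle pod health and have Eureka drop dead instances quickly; the specific values below are illustrative, not taken from the original setup:

# Server side: disable self-preservation and run eviction more often (illustrative values)
eureka.server.enable-self-preservation=false
eureka.server.eviction-interval-timer-in-ms=15000

# Client side: heartbeat more often and expire faster than the 30s/90s defaults (illustrative values)
eureka.instance.lease-renewal-interval-in-seconds=10
eureka.instance.lease-expiration-duration-in-seconds=30

Keep in mind that disabling self-preservation trades protection against temporary network problems for faster eviction; in Kubernetes, where failed pods are restarted by the orchestrator anyway, that trade-off is usually acceptable.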
