
Using endpoints of AWS ElastiCache for Redis

I am using AWS ElastiCache for Redis as the caching solution for my Spring Boot application. I am using spring-boot-starter-data-redis and the Jedis client to connect to my cache.

Imagine that my cache is running cluster-mode-enabled with 3 shards and 2 nodes in each. I understand that the best way to connect is to use the configuration endpoint. Alternatively, I could list the endpoints of all the nodes and that would also get the job done.
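For context, pointing a Spring Boot + Jedis application at the configuration endpoint could look roughly like the sketch below. This is an illustration only, assuming Spring Data Redis; the endpoint hostname is a placeholder, not a real cluster:

```java
import java.util.List;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisClusterConfiguration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;

@Configuration
public class RedisConfig {

    @Bean
    public JedisConnectionFactory redisConnectionFactory() {
        // Placeholder ElastiCache configuration endpoint -- replace with your own.
        // Using the cluster configuration here (rather than a single node's
        // standalone address) lets the client discover the full topology.
        RedisClusterConfiguration cluster = new RedisClusterConfiguration(
                List.of("my-cluster.xxxxxx.clustercfg.use1.cache.amazonaws.com:6379"));
        return new JedisConnectionFactory(cluster);
    }
}
```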

However, even if I use a single node's endpoint from one of the shards, my caching solution still works. That doesn't look right to me. Even if it works, I feel it might cause problems in the cluster in the long run, since there are six nodes in total partitioned into three shards but only one node's endpoint is being used. I have the following questions.

Does using one node's endpoint create an imbalance in the cluster?

or

Is that handled automatically by AWS ElastiCache for Redis?

If I use only one node's endpoint, does that mean the other nodes will never be used?

Thank you!

To answer your questions:

  1. Does using one node's endpoint create an imbalance in the cluster? No.

  2. Is that handled automatically by AWS ElastiCache for Redis? Somewhat.

  3. If I use only one node's endpoint, does that mean the other nodes will never be used? No. All nodes are being used.

This is how Cluster Mode Enabled works. In your case, you have 3 shards, meaning all your slots (where key-value data is stored) are divided among 3 sub-clusters, i.e. shards.

This was explained in this answer as well - https://stackoverflow.com/a/72058580/6024431

So, essentially, your nodes are smart enough to redirect your requests to the node that holds the key slot where your data needs to be stored. So, no imbalances: Redis handles the redirection for you.
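The slot mapping behind that redirection can be sketched in plain Java. This is a self-contained illustration of how Redis Cluster maps a key to one of 16384 hash slots (CRC16-XModem mod 16384, per the Redis Cluster spec); the class name, the example keys, and the 3-shard slot ranges in the comments are illustrative:

```java
import java.nio.charset.StandardCharsets;

public class ClusterSlot {

    // CRC16-CCITT (XModem variant), the checksum the Redis Cluster spec mandates.
    // Check value: crc16("123456789") == 0x31C3.
    static int crc16(byte[] data) {
        int crc = 0;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
            }
        }
        return crc & 0xFFFF;
    }

    // Map a key to one of the 16384 hash slots, honouring {hash tags}:
    // if the key contains a non-empty "{...}" section, only that part is hashed,
    // which is how related keys can be forced onto the same shard.
    static int slotFor(String key) {
        int open = key.indexOf('{');
        if (open >= 0) {
            int close = key.indexOf('}', open + 1);
            if (close > open + 1) {
                key = key.substring(open + 1, close);
            }
        }
        return crc16(key.getBytes(StandardCharsets.UTF_8)) % 16384;
    }

    public static void main(String[] args) {
        // With 3 shards, ElastiCache typically splits the slot space roughly as
        // 0-5460, 5461-10922, 10923-16383 (exact ranges can differ per cluster).
        for (String key : new String[] {"foo", "bar", "user:42"}) {
            System.out.println(key + " -> slot " + slotFor(key));
        }
    }
}
```

Whichever node receives the request computes this same slot; if the slot lives on another shard, the node replies with a MOVED redirection and a cluster-aware client follows it transparently.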

Now, while using node endpoints, you're going to face other problems. ElastiCache runs in the cloud (which is essentially AWS hardware), and all hardware faces issues. You have 3 primaries (1p, 2p, 3p) and 3 replicas (1r, 2r, 3r). If a primary goes down due to a hardware issue (let's say 1p), its replica (1r) will be promoted to become the new primary for that shard. The problem now is that your application is connected directly to 1p, which has been demoted to a replica, so all WRITE operations will fail.

And you will have to change the application code manually whenever this happens.

Alternatively, if you were using the configuration endpoint (or other cluster-level endpoints) instead of node endpoints, this issue would at most be a blip to your application, perhaps for 1-2 seconds.

Cheers!
