
Cache a redis cluster locally

I have a scenario where we want to use Redis, but I am not sure how to go about setting it up. Here is what we want to achieve eventually:

  1. A redundant central Redis cluster, with servers in two AWS regions, where all the writes will occur.

  2. Local Redis caches on servers, each holding a replica of the complete central cluster.

The reason for this is that we have many servers which need read-only access, and we want them to remain independent even in case of an outage (where a server cannot reach the main cluster).

I know there might be a "stale data" issue within the caches, but we can tolerate that as long as we get eventual consistency.

What is the correct way to achieve something like this using Redis?

Thanks!

You need the Redis Replication (Master-Slave) Architecture.

Redis Replication:


Redis replication is a very simple-to-use and easy-to-configure master-slave replication mechanism that allows slave Redis servers to be exact copies of master servers. The following are some very important facts about Redis replication:

  • Redis uses asynchronous replication. Starting with Redis 2.8, however, slaves periodically acknowledge the amount of data processed from the replication stream.
  • A master can have multiple slaves.
  • Slaves are able to accept connections from other slaves. Aside from connecting a number of slaves to the same master, slaves can also be connected to other slaves in a cascading-like structure.
  • Redis replication is non-blocking on the master side. This means that the master will continue to handle queries when one or more slaves perform the initial synchronization.
  • Replication is also non-blocking on the slave side. While the slave is performing the initial synchronization, it can handle queries using the old version of the dataset, assuming you configured Redis to do so in redis.conf. Otherwise, you can configure Redis slaves to return an error to clients if the replication stream is down. However, after the initial sync, the old dataset must be deleted and the new one must be loaded. The slave will block incoming connections during this brief window (that can be as long as many seconds for very large datasets).
  • Replication can be used both for scalability, in order to have multiple slaves for read-only queries (for example, slow O(N) operations can be offloaded to slaves), or simply for data redundancy.
  • It is possible to use replication to avoid the cost of having the master write the full dataset to disk: a typical technique involves configuring your master redis.conf to avoid persisting to disk at all, then connecting a slave configured to save from time to time, or with AOF enabled. However, this setup must be handled with care, since a restarting master will start with an empty dataset: if the slave tries to synchronize with it, the slave will be emptied as well.

Go through the steps: How to Configure Redis Replication.
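The replication setup described above can be sketched with a few redis.conf directives on each local cache server. The hostname and port are placeholders; on Redis versions before 5.0, the directives are spelled `slaveof` and `slave-serve-stale-data` instead.

```conf
# redis.conf on a local read-only cache server
# replicate from the central master (placeholder address)
replicaof central-master.example.com 6379

# keep serving the old (possibly stale) dataset while the link
# to the master is down -- this is what keeps the local caches
# usable during an outage
replica-serve-stale-data yes

# replicas reject writes by default; keep it that way
replica-read-only yes
```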

So I decided to go with Redis Sentinel.

Using Redis Sentinel, I can set the slave-priority on the cache servers to 0, which will prevent them from ever becoming masters.
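A minimal sketch of that setting, in each cache server's redis.conf (`replica-priority` is the modern spelling of `slave-priority`):

```conf
# priority 0 tells Sentinel this replica must never be
# promoted to master -- it stays a pure local read cache
replica-priority 0
```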

I will have one master set up, and a few "backup masters" which will actually be slaves with slave-priority set to a non-zero value, allowing them to take over once the master goes down.

Sentinel will monitor the master, and once the master goes down it will promote one of the "backup masters" to be the new master.
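A minimal sentinel.conf sketch for this setup; the master name, address, and timeout values below are placeholder assumptions:

```conf
# monitor the master under the name "mymaster"; the final "2"
# is the quorum: two Sentinels must agree the master is down
# before a failover is started
sentinel monitor mymaster central-master.example.com 6379 2

# consider the master down after 5 seconds without a reply
sentinel down-after-milliseconds mymaster 5000

# allow up to 60 seconds for a failover attempt to complete
sentinel failover-timeout mymaster 60000
```

Only the "backup masters" (replicas with a non-zero priority) are eligible for promotion; the local caches with priority 0 are skipped during failover.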

More info can be found in the Redis Sentinel documentation.
