
Cache Cluster deployment topology

I'm going to deploy an in-memory cache cluster (current thinking is Redis) for some public-facing web workloads and was wondering where the cluster should live (deployment topology). Two options IMO:

  1. Sitting on the Web Tier (which is horizontally scalable)
  2. A dedicated cache tier sitting behind the Web Tier and in front of the DB Tier

Background: the application on the Web and DB Tiers runs on Windows, so if I stick the cluster on the Web Tier it needs to be supported on Windows (MSFT maintains a stable Redis port). If I go with the dedicated cache tier, I was thinking of some lightweight Linux servers (an HA cluster), meaning that as the Web Tier horizontally scales it uses this cache cluster for its lookups, e.g. reference data.

Pros, cons, thoughts, other options I'm missing?

*Note: I don't have the luxury of utilising a cloud service provider's "cache as a service"; it's not an option, unfortunately ...

Cheers,

Surprised at the lack of community support around Redis and caching in general.

To answer my own question, I ended up going with a Linux (RHEL) master/slave Redis cache tier. I opted for a master/slave deployment topology (as opposed to a Redis Cluster) because it gives me HA at the cache tier: the master takes writes, while both master and slave serve reads. That suits my needs, as I go to the DB on a cache miss, and I configured Redis to never persist to disk (in-memory only).
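For anyone landing here later, here is a minimal sketch of the cache-aside read/write split described above, assuming redis-py on the Web Tier; the hostnames, TTL, and the load_reference_data_from_db() helper are placeholders I've made up for illustration, not part of my actual setup:

```python
# Cache-aside lookup against a Redis master/slave pair.
# Reads hit the slave (replica); writes and cache population go to the master.
# Hostnames and the DB helper below are placeholders -- adapt to your environment.
import json

import redis

master = redis.Redis(host="cache-master.example.local", port=6379)
replica = redis.Redis(host="cache-slave.example.local", port=6379)

CACHE_TTL_SECONDS = 3600  # expire entries instead of relying on disk persistence


def load_reference_data_from_db(key):
    """Placeholder for the real lookup against the DB Tier."""
    raise NotImplementedError


def get_reference_data(key):
    # 1. Try the cache first (read from the replica).
    cached = replica.get(key)
    if cached is not None:
        return json.loads(cached)

    # 2. Cache miss: fall back to the database.
    value = load_reference_data_from_db(key)

    # 3. Populate the cache via the master so the replica picks it up.
    master.set(key, json.dumps(value), ex=CACHE_TTL_SECONDS)
    return value
```

On the Redis side, keeping the tier purely in-memory comes down to `save ""` (no RDB snapshots) and `appendonly no` (no AOF) in redis.conf, with the slave pointed at the master via `slaveof` (or `replicaof` on newer releases).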
