
How to configure replicated cache in Hazelcast cluster?

My Spring application consists of a dozen microservices. Each microservice provides data that does not change very often. In order to reduce communication between the microservices, I am considering using Hazelcast.

My idea is that every microservice would have an embedded Hazelcast instance. The microservices run in the same network, and I suppose the Hazelcast instances would form a cluster. Every microservice would put its data into its local Hazelcast on startup, and the data would be copied to every other Hazelcast in the cluster. When a microservice needs data from another microservice, it would first look into its local Hazelcast and only make a network call if the data is missing from the local cache.
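For context, a minimal sketch of what such an embedded setup could look like (the map name and key are illustrative, not from my application; this assumes Hazelcast's default multicast discovery, which lets members on the same network find each other):

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

// Each microservice starts one embedded member; members joining the same
// network form a single cluster and share the distributed map.
HazelcastInstance member = Hazelcast.newHazelcastInstance(new Config());
IMap<String, String> data = member.getMap("shared-data");
data.put("service-a-key", "value"); // written once, readable from any member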

Is it possible to configure something like this with Hazelcast? I already made an attempt, but the data from a microservice ended up distributed across all Hazelcast nodes in the cluster.

I used a very trivial configuration:

import com.hazelcast.config.Config;
import com.hazelcast.config.EvictionPolicy;
import com.hazelcast.config.MapConfig;

import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
@EnableCaching
@Profile("hazelcast")
public class HazelcastCacheConfiguration {
    @Bean
    public Config hazelcastConfig() {
        return new Config()
                .setInstanceName("routes-cache")
                .addMapConfig(
                        new MapConfig()
                                .setName("ports-cache")
                                .setEvictionPolicy(EvictionPolicy.LRU)
                ).addMapConfig(
                        new MapConfig()
                                .setName("routes-cache")
                                .setEvictionPolicy(EvictionPolicy.LRU)
                ).setProperty("hazelcast.logging.type", "slf4j");
    }
}

I checked data replication across the cluster in Hazelcast Management Center. My sample data set has only 13 records. The microservice pushed the 13 records into its local Hazelcast on startup, and in Management Center I saw that there were 2 nodes in the cluster, with 9 records on one node and 4 records on the other microservice's node.

Thank you in advance!

Hazelcast IMap is a partitioned data structure: each entry is mapped to a partition (based on a hash of its key), and each member is designated as the owner or a backup of some partitions. What you describe can be accomplished by configuring a near cache on your IMap like this:

@Bean
public Config hazelcastConfig() {
    NearCacheConfig routesNearCache = new NearCacheConfig("routes-near-cache")
            .setInMemoryFormat(InMemoryFormat.OBJECT);
    return new Config()
            .setInstanceName("routes-cache")
            .addMapConfig(
                    new MapConfig()
                            .setName("routes-cache")
                            .setEvictionPolicy(EvictionPolicy.LRU)
                            .setNearCacheConfig(routesNearCache)
            );
    // continue with the rest of config here
}

The near cache is a local cache of entries and works transparently on top of the IMap. The first time you call routesCache.get(K), the entry is fetched from the (possibly remote) member that owns the partition to which K is mapped. The value is then cached in the local near cache, and each subsequent routesCache.get(K) is served locally. The near cache works with invalidation events (for example, when a put happens on a particular key, that key's entry is removed from every near cache) to ensure values are kept up to date.
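A short usage sketch of this read path (the key and the String value type are illustrative; hazelcastConfig() is the bean defined above):

HazelcastInstance hz = Hazelcast.newHazelcastInstance(hazelcastConfig());
IMap<String, String> routesCache = hz.getMap("routes-cache");

routesCache.get("route-42"); // first read: fetched from the member owning the partition
routesCache.get("route-42"); // subsequent reads: served from the local near cache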

Another alternative you may consider is a ReplicatedMap: in this case, each member maintains a full copy of all the data in the map, so reads are always local. Consider this data structure if your data set fits in each member's memory and your use case is mostly reads.
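A minimal sketch of the ReplicatedMap alternative (the map name and key/value types are illustrative, assuming the same Hazelcast 3.x API as the code above):

import com.hazelcast.config.Config;
import com.hazelcast.config.InMemoryFormat;
import com.hazelcast.config.ReplicatedMapConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ReplicatedMap;

Config config = new Config()
        .addReplicatedMapConfig(
                new ReplicatedMapConfig()
                        .setName("routes-replicated")
                        .setInMemoryFormat(InMemoryFormat.OBJECT));

HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
// Puts are replicated to every member, so each member holds the full data set.
ReplicatedMap<String, String> routes = hz.getReplicatedMap("routes-replicated");
routes.put("route-42", "some-route-data");
routes.get("route-42"); // always served from local memory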
