
Increase partition size to include more data in hazelcast map

I am using Hazelcast as a cache cluster.

I have three map stores for three tables in MySQL. Each table has 100,000 records. I am starting three Hazelcast instances to load these tables into three different maps. But there is an issue.

When I start the first instance and call loadAll for the first table, it loads all 100,000 entries. But when I start the second instance to load the second table with the same process, it loads only 49,449 entries, and the third instance loads only 33,249 entries.

I am using three different Java programs to load these tables. My partition count is the default, 271. I looked for errors, but none are shown, and the loaded counts are always the same as described above. Could you please help me figure out what the issue could be?
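For reference, the partition count mentioned above is controlled by the `hazelcast.partition.count` property (271 is the default). A minimal sketch of how it would be set in `hazelcast.xml`, in case you want to verify or change it:

```xml
<hazelcast xmlns="http://www.hazelcast.com/schema/config">
  <properties>
    <!-- 271 is already the default; shown here only to make the setting explicit -->
    <property name="hazelcast.partition.count">271</property>
  </properties>
</hazelcast>
```

Note that the partition count determines how entries are distributed across the cluster, not how many entries a map can hold, so it is unlikely to cap the number of loaded records by itself.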

Can you log the number of keys returned by each of your MapLoader/MapStore implementations when MapLoader.loadAllKeys() is called?

Perhaps this can shed some light on the situation. If the number isn't 100k there, then the issue is not inside HZ.
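The suggested logging can be sketched as follows. To keep the snippet self-contained, it uses a simplified stand-in for Hazelcast's `com.hazelcast.map.MapLoader` interface and an in-memory map in place of the real JDBC query against MySQL; the class and field names are hypothetical, only the idea of counting what `loadAllKeys()` returns comes from the answer above.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified stand-in for com.hazelcast.map.MapLoader<K, V>, declared here
// only so this sketch compiles on its own; implement the real interface
// in actual Hazelcast code.
interface MapLoader<K, V> {
    V load(K key);
    Map<K, V> loadAll(Collection<K> keys);
    Iterable<K> loadAllKeys();
}

// Hypothetical loader for one of the three tables. The in-memory "table"
// map stands in for the SELECT against MySQL purely for illustration.
class LoggingTableLoader implements MapLoader<Long, String> {
    private final Map<Long, String> table;

    LoggingTableLoader(Map<Long, String> table) {
        this.table = table;
    }

    @Override
    public String load(Long key) {
        return table.get(key);
    }

    @Override
    public Map<Long, String> loadAll(Collection<Long> keys) {
        Map<Long, String> result = new HashMap<>();
        for (Long key : keys) {
            result.put(key, table.get(key));
        }
        return result;
    }

    @Override
    public Iterable<Long> loadAllKeys() {
        List<Long> keys = new ArrayList<>(table.keySet());
        // Log how many keys this loader actually hands to Hazelcast;
        // compare this number against the map size you observe.
        System.out.println("loadAllKeys returned " + keys.size() + " keys");
        return keys;
    }
}
```

If each loader logs 100,000 here but the map ends up smaller, the loss happens after handoff to Hazelcast; if the logged count is already short, the problem is in the loader or the query.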
