Infinispan 9, Replicated Cache is Expiring Entries but never allows them to be removed from JVM heap

Was doing some internal testing of a clustering solution on top of Infinispan/JGroups and noticed that expired entries never became eligible for GC, due to a reference held by the expiration reaper, whenever the cluster had more than one node with expiration enabled and eviction disabled. Due to some system constraints, the versions below are being used:

  • JDK 1.8
  • Infinispan 9.4.20
  • JGroups 4.0.21

In my example I am using a simple Java main scenario, placing a specific number of entries and expecting them to expire after a specific time period. The expiration is indeed happening, as can be confirmed both by accessing an expired entry and by the respective event listener (if configured), but the entries never seem to be removed from memory, even after an explicit GC or while getting close to an OOM error.

So the question is:

Is this really the expected default behavior, or am I missing a critical configuration regarding cluster replication / expiration / serialization?

Example:

Cache Manager:

return new DefaultCacheManager("infinispan.xml");

infinispan.xml:

  <jgroups>
     <stack-file name="udp" path="jgroups.xml" />
  </jgroups>

  <cache-container default-cache="default">
     <transport stack="udp" node-name="${nodeName}" />
     <replicated-cache name="myLeakyCache" mode="SYNC">
        <expiration interval="30000" lifespan="3000" max-idle="-1"/>
     </replicated-cache>
  </cache-container>
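For reference, a programmatic equivalent of the replicated-cache definition above (a sketch against the 9.x ConfigurationBuilder API; the wrapper class and method name are illustrative):

import java.util.concurrent.TimeUnit;

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.EmbeddedCacheManager;

public class ProgrammaticConfig {

    // Illustrative helper: mirrors the <replicated-cache> element from infinispan.xml
    static void defineLeakyCache(EmbeddedCacheManager cacheManager) {
        Configuration config = new ConfigurationBuilder()
            .clustering().cacheMode(CacheMode.REPL_SYNC)      // <replicated-cache mode="SYNC">
            .expiration()
                .lifespan(3000, TimeUnit.MILLISECONDS)        // lifespan="3000"
                .maxIdle(-1, TimeUnit.MILLISECONDS)           // max-idle="-1" (disabled)
                .wakeUpInterval(30000, TimeUnit.MILLISECONDS) // interval="30000" (reaper period)
            .build();
        cacheManager.defineConfiguration("myLeakyCache", config);
    }
}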

Default UDP jgroups.xml, as in the packaged example:

.....

<UDP
        mcast_addr="${jgroups.udp.mcast_addr:x.x.x.x}"
        mcast_port="${jgroups.udp.mcast_port:46655}"
        bind_addr="${jgroups.bind.addr:y.y.y.y}"
        tos="8"
        ucast_recv_buf_size="200k"
        ucast_send_buf_size="200k"
        mcast_recv_buf_size="200k"
        mcast_send_buf_size="200k"
        max_bundle_size="64000"
        ip_ttl="${jgroups.udp.ip_ttl:2}"
        enable_diagnostics="false"
        bundler_type="old"
        thread_naming_pattern="pl"
        thread_pool.enabled="true"
        thread_pool.max_threads="30"
        />

The dummy cache entry:

public class CacheMemoryLeak implements Serializable {
    private static final long serialVersionUID = 1L;
    Date date = new Date();
}

An example usage from the "service":

Cache<String, Object> cache = cacheManager.getCache("myLeakyCache");
cache.put(key, new CacheMemoryLeak());
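
For completeness, a minimal sketch of the kind of test driver described above (the class name, entry count, and sleep values are illustrative; lifespan and reaper interval come from the configuration):

import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class ExpirationLeakTest {

    public static void main(String[] args) throws Exception {
        // Assumes the infinispan.xml shown above is in the working directory / classpath
        try (DefaultCacheManager cacheManager = new DefaultCacheManager("infinispan.xml")) {
            Cache<String, Object> cache = cacheManager.getCache("myLeakyCache");

            for (int i = 0; i < 100_000; i++) {
                cache.put("key-" + i, new CacheMemoryLeak());
            }

            // Wait past the 3 s lifespan plus the 30 s reaper interval, then request a GC
            Thread.sleep(35_000);
            System.gc();

            // Expired entries are no longer visible to readers...
            System.out.println("visible size: " + cache.size());
            // ...yet a heap dump taken at this point still shows the MortalCacheEntry instances
        }
    }
}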

Some info / tryouts:

  • When there is only one node in the cluster, or when the nodes are restarted sequentially, the references do get cleared.
  • Enabling max-idle shows the same behavior (makes sense, since the expiration reaper is the same).
  • Enabling eviction does not resolve the issue; it just keeps the count of "expired" references within the max limit. If that limit is reached quickly enough, random eviction happens on live entries as well (default removal strategy)!
  • If I change the cache entry to a plain String, the infinispan.MortalCacheEntries are removed from heap space on the next GC cycle, once expired and marked by the expiration reaper, unlike with the custom object!
  • Enabling the expiration reaper on only one node didn't resolve the issue, and might break the failover mechanism.
  • Upgraded to Infinispan 10.1.8.Final, but faced the same issue.

It seems no one else has had the same issue, or they were using primitive objects as cache entries and thus never noticed it. After replicating the scenario and fortunately tracing the root cause, the points below came up:

  • Always implement Serializable / hashCode / equals for custom objects that are going to be transmitted through a replicated/synchronized cache (see the sketch after this list).
  • Never put primitive arrays as keys, since their hashCode / equals are identity-based and are not calculated from the contents.
  • Don't enable eviction with the removal strategy on replicated caches: upon reaching the maximum limit, entries are removed randomly (based on TinyLFU), not based on the expiration timer, and are never removed from the JVM heap.
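
As an illustration of the first point, here is the dummy entry from the question extended with content-based equals / hashCode (a sketch; the single Date field is taken from the original example):

import java.io.Serializable;
import java.util.Date;
import java.util.Objects;

public class CacheMemoryLeak implements Serializable {
    private static final long serialVersionUID = 1L;

    Date date = new Date();

    // Content-based equality, so replicated copies of the same entry compare equal
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof CacheMemoryLeak)) return false;
        return Objects.equals(date, ((CacheMemoryLeak) o).date);
    }

    @Override
    public int hashCode() {
        return Objects.hash(date);
    }
}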
