Infinispan Timeout Exception with JBoss EAP 6.1
This one is driving me crazy. I am getting exceptions like this one at random points:
ISPN000136: Execution error: org.infinispan.util.concurrent.TimeoutException: Unable to acquire lock after [60 seconds] on key [com.acme.entity.EntityA#9073] for requestor [GlobalTransaction:<null>:9593:local]! Lock held by [GlobalTransaction:<null>:9580:local]
The setup is fairly old: JBoss EAP 6.1 with Infinispan 5.2.6 for the second-level cache. There can be multiple JBoss servers running as standalone instances (not configured to be clustered), but they use SQLProxy and a Percona MySQL cluster. (However, we have seen the same problem on a single JBoss instance with one DB on the same server.)
The configuration in standalone.xml is currently this:
<subsystem xmlns="urn:jboss:domain:infinispan:1.5">
    <cache-container name="web" aliases="standard-session-cache" default-cache="local-web" module="org.jboss.as.clustering.web.infinispan">
        <local-cache name="local-web" batching="true">
            <file-store passivation="false" purge="false"/>
        </local-cache>
    </cache-container>
    <cache-container name="hibernate" default-cache="local-query" module="org.jboss.as.jpa.hibernate:4">
        <local-cache name="entity">
            <locking isolation="READ_COMMITTED" acquire-timeout="60000"/>
            <transaction mode="NON_XA"/>
            <eviction strategy="LRU" max-entries="10000"/>
            <expiration max-idle="100000"/>
        </local-cache>
        <local-cache name="local-query">
            <locking isolation="READ_COMMITTED" acquire-timeout="60000"/>
            <transaction mode="NONE"/>
            <eviction strategy="LRU" max-entries="10000"/>
            <expiration max-idle="100000"/>
        </local-cache>
        <local-cache name="timestamps">
            <transaction mode="NONE"/>
            <eviction strategy="NONE"/>
        </local-cache>
    </cache-container>
</subsystem>
We have tried different timeout values without any success. We recently changed the isolation from the default REPEATABLE_READ (in JBoss EAP 6.1) to READ_COMMITTED, which seems to be the default for Infinispan 5.2.6 and became the default in WildFly 9; see also https://developer.jboss.org/thread/243458 . I was hoping that would fix the problem, but we are still seeing these timeout exceptions.
For the entities that use the second-level cache we use this on the entity classes:
@Cache(usage = CacheConcurrencyStrategy.TRANSACTIONAL,region="cache.StandardEntity")
public class EntityA
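In full (simplified) form, with the imports this needs, the class looks roughly like this (the @Entity annotation and id field are added here for completeness; the real entity has many more fields):

import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Simplified entity: only the cache-relevant annotations are shown.
@Entity
@Cache(usage = CacheConcurrencyStrategy.TRANSACTIONAL, region = "cache.StandardEntity")
public class EntityA {

    @Id
    private Long id;

    // ... more fields, getters and setters
}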
And in our infinispan.xml file we have:
<namedCache name="Acme.cache.StandardEntity">
    <eviction strategy="LRU" maxEntries="1000"/>
    <expiration maxIdle="3600" lifespan="3600" wakeUpInterval="7200000"/>
</namedCache>
The infinispan.xml file was created by a tool when we migrated from EHCache a few years ago. The wakeUpInterval looks rather high; could this be a problem? I am not actually sure whether the namedCache is configured correctly. Does it need to be prefixed with Acme (the name of the app in this case)? How can I test that these named caches are actually used? I am a bit confused about what needs to be configured in the standalone.xml file and what in the infinispan.xml file.
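One way I am thinking of verifying this is to list the cache MBeans over JMX: if a region cache such as Acme.cache.StandardEntity is really in use, I would expect it to show up there. A rough, untested sketch (the MBean domain is my assumption; EAP 6 seems to use jboss.infinispan, while standalone Infinispan uses org.infinispan):

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ListCacheMBeans {

    // Untested sketch: print every cache MBean Infinispan has registered,
    // to see which named caches/regions actually exist at runtime.
    // Adjust the domain ("jboss.infinispan" vs "org.infinispan") as needed.
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName pattern = new ObjectName("jboss.infinispan:type=Cache,*");
        for (ObjectName name : server.queryNames(pattern, null)) {
            System.out.println(name);
        }
    }
}

(This would have to run inside the server JVM, or against a remote JMX connection instead of the platform MBean server.)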
Are there still things in the configuration that I could try in order to fix this problem? If not, how can I figure out who 'locked the door' (acquired the lock)? I can see the threads trying to open the door (trying to acquire the lock) and complaining (the exception is thrown), but I cannot see who locked it in the first place. If I can see who is holding the lock for so long, I might be able to fix it.
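The closest thing I have found so far is Infinispan's LockManager, which I could in principle query from code running inside the server. Below is an untested sketch; the JNDI name of the hibernate cache container, the key format, and the availability of getLockManager() in 5.2 are all assumptions on my part:

import javax.naming.InitialContext;
import org.infinispan.Cache;
import org.infinispan.manager.EmbeddedCacheManager;

public class LockProbe {

    // Untested sketch: look up the "hibernate" cache container that EAP
    // binds into JNDI and ask Infinispan's LockManager who currently owns
    // the lock on a given key. JNDI name and key format are assumptions.
    public static Object lockOwner(String cacheName, Object key) throws Exception {
        EmbeddedCacheManager manager = (EmbeddedCacheManager)
                new InitialContext().lookup("java:jboss/infinispan/container/hibernate");
        Cache<Object, Object> cache = manager.getCache(cacheName);
        // getOwner(key) returns null when nobody holds the lock on that key
        return cache.getAdvancedCache().getLockManager().getOwner(key);
    }
}

If I read the 5.2 API correctly, LockManager also has a printLockInfo() method that could dump all currently held locks in one go.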
Locally I can enable logging for this:
<logger category="org.infinispan.util.concurrent.locks">
    <level name="TRACE"/>
</logger>
but I can't really do this in production (too much logging), and locally I cannot reproduce the problem. Any other ideas on how I could find out which thread acquired the lock?
Any ideas are appreciated. Thanks!
It seems you are having a concurrent access problem on the lock. I would suggest pessimistic locking to avoid this problem; it uses more resources, but it helps in cases like this:
/subsystem=infinispan/cache-container=web/local-cache=persistent/transaction=TRANSACTION:add(locking=PESSIMISTIC)
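For the configuration in the question this would presumably have to target the entity cache in the hibernate container instead, and since the transaction=TRANSACTION resource already exists there, something like /subsystem=infinispan/cache-container=hibernate/local-cache=entity/transaction=TRANSACTION:write-attribute(name=locking,value=PESSIMISTIC) (please verify against your profile). In standalone.xml the equivalent should be the locking attribute on the transaction element, e.g. <transaction mode="NON_XA" locking="PESSIMISTIC"/>.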
Regarding infinispan.xml: this file is only for Red Hat Data Grid (JDG, RHDG) embedded mode (or the community Infinispan version). For EAP 6.1 you need to use the standalone-ha or standalone-full-ha profiles to set the cache properties.