
Cassandra - Seed gossip version is -2147483648

I've just noticed a weird scenario in my UAT environment.

I've got a 3-node cluster, but I noticed this morning that nodes 2 and 3 think node 1 is dead. Node 1, however, thinks everyone is alive.

In the logs for nodes 2 and 3, it says the following:

WARN  [MessagingService-Outgoing-/10.0.8.172] 2015-12-06 02:20:02,987 OutboundTcpConnection.java:423 - Seed gossip version is -2147483648; will not connect with that version

Also, it appears node 1 is no longer listening on 9042. It is still listening on 7000, though.
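For reference, one way to confirm which ports node 1 is actually bound to on Windows is with netstat (the port numbers below are the Cassandra defaults; 9042 is the CQL native transport, 7000 is inter-node gossip):

netstat -ano | findstr ":9042"
netstat -ano | findstr ":7000"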

Worth noting: I'm on Windows Server 2008 R2 and running Cassandra 2.2.

Thanks

Do nodes 2 and 3 know that node 1 exists and that it is down, or do they not know that it exists at all?
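One way to check this, assuming nodetool is available on nodes 2 and 3, is to compare each node's ring view; a row marked DN means the node is known but considered down, while a missing row means it is not known at all:

nodetool status
(run on each node; UN = up/normal, DN = down/normal)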

Have you checked the settings in your cassandra.yaml file? Are you sure the Windows firewall on node 1 is not blocking the ports? Look here: https://docs.datastax.com/en/cassandra/2.0/cassandra/security/secureFireWall_r.html
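If the firewall turns out to be the culprit, here is a sketch of opening the default Cassandra ports with netsh on Windows Server 2008 R2 (the rule names are arbitrary; adjust the ports if you changed the defaults):

netsh advfirewall firewall add rule name="Cassandra gossip" dir=in action=allow protocol=TCP localport=7000
netsh advfirewall firewall add rule name="Cassandra CQL" dir=in action=allow protocol=TCP localport=9042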

If they know that it exists but think that it is down, then I would look in the cassandra.yaml file, specifically at the listen_address on node 1.
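For illustration, a minimal sketch of the relevant cassandra.yaml lines on node 1, assuming 10.0.8.172 (the address from the log line above) is node 1's interface; an empty or wrong listen_address is a common reason other nodes cannot reach it:

# cassandra.yaml on node 1 (sketch; 10.0.8.172 assumed from the log above)
listen_address: 10.0.8.172    # address other nodes use for gossip (port 7000)
rpc_address: 10.0.8.172       # address clients use for CQL (port 9042)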

To solve this, just set:

Node 1: seeds = node1,node2

Node 2: seeds = node2.


Start node 1 first and then node 2!
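For reference, the seed list lives under seed_provider in cassandra.yaml; a sketch of what "seeds = node1,node2" would look like on node 1, assuming node1 and node2 resolve to the nodes' addresses:

seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "node1,node2"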

Fixed my problem!
