Artemis cluster - load balancing with multiple acceptors

I have an Artemis cluster of 4 nodes (2 masters, 2 backups). Each broker has 2 acceptors - one for the core protocol and one for STOMP (since STOMP needs the prefix properties), so they listen on different ports.

When I connect to the cluster from a Spring Boot app using the JMS client 2.x and a ConnectionFactory, the addresses and messages are load balanced between the nodes. But when I interact with a STOMP client there is no load balancing at all. It seems the cluster connections are somehow not recognized. I am not sure what the problem might be.

The documentation says that messages are load balanced over cluster connections:

These cluster connections allow messages to flow between the nodes of the cluster to balance load.

So maybe I need additional cluster connections and connectors, which are configured in broker.xml?

I have one STOMP client which connects to the first master node on port 61613. When I send a message to the first master node I can consume it from the other node, and I can see that the addresses are created on both nodes. In the console, one appears passive with a gear icon and one active with a folder icon that can be expanded. The addresses created by the application each exist on only one node.
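For reference, the STOMP client sends frames like the one below. Assuming a hypothetical queue named `orders`, the `anycastPrefix=/queue/` parameter on the STOMP acceptor strips the `/queue/` prefix so the frame targets the core address `orders` with anycast routing:

```
SEND
destination:/queue/orders
content-type:text/plain

hello
^@
```

(`^@` denotes the NUL byte that terminates every STOMP frame.)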

The following shows snippets of the broker configs for one master and one backup broker:

master:

<connectors>
   <connector name="netty-connector">tcp://localhost:61616</connector>
</connectors>

<acceptors>
   <acceptor name="artemis">tcp://localhost:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
   <acceptor name="stomp">tcp://localhost:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true;anycastPrefix=/queue/;multicastPrefix=/topic/</acceptor>
</acceptors>


<!-- failover config -->
<ha-policy>
   <replication>
      <master>
         <check-for-live-server>true</check-for-live-server>
      </master>
   </replication>   
</ha-policy>

<broadcast-groups>
   <broadcast-group name="my-broadcast-group">
      <broadcast-period>5000</broadcast-period>
      <jgroups-file>test-jgroups-jdbc_ping.xml</jgroups-file>
      <jgroups-channel>active_broadcast_channel</jgroups-channel>
      <connector-ref>netty-connector</connector-ref>
   </broadcast-group>
</broadcast-groups>

<discovery-groups>
   <discovery-group name="my-discovery-group">
      <jgroups-file>test-jgroups-jdbc_ping.xml</jgroups-file>
      <jgroups-channel>active_broadcast_channel</jgroups-channel>
      <refresh-timeout>10000</refresh-timeout>
   </discovery-group>
</discovery-groups>

<cluster-connections>
   <cluster-connection name="my-cluster">
      <connector-ref>netty-connector</connector-ref>
      <retry-interval>500</retry-interval>
      <use-duplicate-detection>true</use-duplicate-detection>
      <message-load-balancing>ON_DEMAND</message-load-balancing>
      <max-hops>1</max-hops>
      <discovery-group-ref discovery-group-name="my-discovery-group"/>
   </cluster-connection>       
</cluster-connections>

backup:

<connectors>
   <connector name="netty-connector">tcp://localhost:61617</connector>
   <connector name="server1-netty-live-connector">tcp://localhost:61616</connector>  
</connectors>

<acceptors>
   <acceptor name="artemis">tcp://localhost:61617?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
   <acceptor name="stomp">tcp://localhost:61614?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true;anycastPrefix=/queue/;multicastPrefix=/topic/</acceptor>
</acceptors>

<cluster-user>user</cluster-user>
<cluster-password>pw</cluster-password>

<!-- failover config -->
<ha-policy>
   <replication>
      <slave>
         <allow-failback>true</allow-failback>
      </slave>
   </replication>
</ha-policy>

<broadcast-groups>
   <broadcast-group name="my-broadcast-group">
      <broadcast-period>5000</broadcast-period>
      <jgroups-file>test-jgroups-jdbc_ping.xml</jgroups-file>
      <jgroups-channel>active_broadcast_channel</jgroups-channel>
      <connector-ref>netty-connector</connector-ref>
   </broadcast-group>
</broadcast-groups>

<discovery-groups>
   <discovery-group name="my-discovery-group">
      <jgroups-file>test-jgroups-jdbc_ping.xml</jgroups-file>
      <jgroups-channel>active_broadcast_channel</jgroups-channel>
      <refresh-timeout>10000</refresh-timeout>
   </discovery-group>
</discovery-groups>

<cluster-connections>
   <cluster-connection name="my-cluster">
      <connector-ref>netty-connector</connector-ref>
      <retry-interval>500</retry-interval>
      <use-duplicate-detection>true</use-duplicate-detection>
      <message-load-balancing>ON_DEMAND</message-load-balancing>
      <max-hops>1</max-hops>
      <discovery-group-ref discovery-group-name="my-discovery-group"/>
   </cluster-connection>
</cluster-connections>

Can anyone help?

Based on all the information you've provided so far everything seems to be working fine. The documentation you cited says:

These cluster connections allow messages to flow between the nodes of the cluster to balance load.

And when you send a STOMP message to one node you are able to consume it from the other, which means that messages are flowing over the cluster connection between nodes on demand to balance load.

You don't need any additional cluster connections or connectors.

To be clear, each broker in the cluster will have its own set of addresses, queues, and messages depending on what the clients connected to it are doing. You shouldn't necessarily expect to see all the same addresses or queues on all the different nodes of the cluster - especially if you are relying on automatic creation of addresses and queues rather than pre-configuring them in broker.xml.

That said, you will see some different behaviors between applications using the JMS client (e.g. your Spring app) and applications using STOMP. This is because the STOMP protocol doesn't define anything related to advanced concepts like connection load balancing, failover, etc. STOMP is a very simple protocol and clients are usually quite simple as well. Furthermore, Spring applications typically create multiple connections. Those connections will be balanced across the nodes in the cluster in a round-robin fashion, which is almost certainly why the related addresses and queues appear on all the nodes while the ones for your single STOMP client do not. Client-side connection load balancing is discussed further in the documentation.
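As an illustration, the core JMS client enables client-side connection load balancing when the connection URL lists multiple hosts; new connections are then distributed round-robin across them. A sketch, with hypothetical hostnames:

```
(tcp://master1:61616,tcp://master2:61616)?ha=true&reconnectAttempts=-1
```

Here `ha=true` lets the client fail over to a backup and `reconnectAttempts=-1` retries indefinitely. A lone STOMP client, by contrast, connects to exactly the one host and port it was given.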

Messages are distributed by the nodes themselves independent of the protocol used. That's the whole purpose of the cluster connections - to forward messages to other nodes.
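Related to this, with `ON_DEMAND` load balancing a message that has already been routed to a queue on one node is only redistributed to another node once a matching consumer exists there, and only after the configured redistribution delay (redistribution is disabled by default). A minimal sketch of enabling it in broker.xml, assuming the usual catch-all address-settings match:

```xml
<address-settings>
   <address-setting match="#">
      <!-- 0 = redistribute immediately once a consumer appears on another node -->
      <redistribution-delay>0</redistribution-delay>
   </address-setting>
</address-settings>
```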

Client connections can't be automatically distributed by the nodes themselves because that would require redirects, and not all protocols (e.g. STOMP) support those semantics.
