ActiveMQ Artemis Kubernetes multi-broker setup

I am trying to set up an ActiveMQ Artemis multi-broker deployment in a Kubernetes environment. I am able to run single-pod deployments with persistence enabled successfully. I used the Artemis Docker image built from the official repo.

But if I try to set up a multi-pod deployment with the same persistent volume attached (a shared PV), the pods get deployed, but only one pod starts successfully and the others crash, because the first Artemis container has taken a file lock on the data directory. So I am unable to bring up multiple pods with shared storage.
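For reference, that lock is expected: Artemis takes a file lock on its data directory precisely so that two live brokers never run against the same journal. The usual Kubernetes pattern is to give each broker its own storage instead, e.g. a StatefulSet with volumeClaimTemplates. A minimal sketch with placeholder names follows (the serviceName points at a headless Service like the one sketched further below; adjust the mount path to wherever your image keeps its data directory):

# Sketch: one PVC per broker pod via volumeClaimTemplates, so no two
# brokers contend for the same journal lock. All names are placeholders.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: artemis
spec:
  serviceName: artemis-headless
  replicas: 2
  selector:
    matchLabels:
      app: custom-artemis-service
  template:
    metadata:
      labels:
        app: custom-artemis-service
    spec:
      containers:
        - name: artemis
          image: my-artemis:latest    # placeholder for the image built from the official repo
          volumeMounts:
            - name: data
              mountPath: /var/lib/artemis/data   # adjust to your image's data directory
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi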

I also tried JGroups with broadcast/discovery groups to form a cluster, so that each broker has its own storage and the brokers communicate with each other internally, but I was not able to configure it successfully.

Has anyone been able to successfully deploy a multi-broker Artemis setup in Kubernetes? There is no issue if each pod has its own storage, but the Artemis brokers should be highly available and should communicate as a cluster so that we do not lose messages.

It would be really helpful if anyone could share resources or steps on how to achieve this.

Edit

<?xml version='1.0'?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xi="http://www.w3.org/2001/XInclude"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

   <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">

      <name>${name}</name>

${jdbc}
      <persistence-enabled>${persistence-enabled}</persistence-enabled>

      <connectors>
         <connector name="netty-connector">tcp://${ipv4addr:localhost}:61618</connector>
      </connectors>

      <broadcast-groups>
         <broadcast-group name="cluster-broadcast-group">
            <broadcast-period>5000</broadcast-period>
            <jgroups-file>jgroups.xml</jgroups-file>
            <jgroups-channel>active_broadcast_channel</jgroups-channel>
            <connector-ref>netty-connector</connector-ref>
         </broadcast-group>
      </broadcast-groups>

      <discovery-groups>
         <discovery-group name="cluster-discovery-group">
            <jgroups-file>jgroups.xml</jgroups-file>
            <jgroups-channel>active_broadcast_channel</jgroups-channel>
            <refresh-timeout>10000</refresh-timeout>
         </discovery-group>
      </discovery-groups>

      <cluster-connections>
         <cluster-connection name="artemis-cluster">
            <connector-ref>netty-connector</connector-ref>
            <retry-interval>500</retry-interval>
            <use-duplicate-detection>true</use-duplicate-detection>
            <message-load-balancing>STRICT</message-load-balancing>
            <!-- <address>jms</address> -->
            <max-hops>1</max-hops>
            <discovery-group-ref discovery-group-name="cluster-discovery-group"/>
            <!-- <forward-when-no-consumers>true</forward-when-no-consumers> -->
         </cluster-connection>
      </cluster-connections>

      <!-- this could be ASYNCIO, MAPPED, NIO
           ASYNCIO: Linux Libaio
           MAPPED: mmap files
           NIO: Plain Java Files
       -->
      <journal-type>${journal.settings}</journal-type>

      <paging-directory>${data.dir}/paging</paging-directory>

      <bindings-directory>${data.dir}/bindings</bindings-directory>

      <journal-directory>${data.dir}/journal</journal-directory>

      <large-messages-directory>${data.dir}/large-messages</large-messages-directory>

      ${journal-retention}

      <journal-datasync>${fsync}</journal-datasync>

      <journal-min-files>2</journal-min-files>

      <journal-pool-files>10</journal-pool-files>

      <journal-device-block-size>${device-block-size}</journal-device-block-size>

      <journal-file-size>10M</journal-file-size>
      ${journal-buffer.settings}${ping-config.settings}${connector-config.settings}

      <!-- how often we are looking for how many bytes are being used on the disk in ms -->
      <disk-scan-period>5000</disk-scan-period>

      <!-- once the disk hits this limit the system will block, or close the connection in certain protocols
           that won't support flow control. -->
      <max-disk-usage>90</max-disk-usage>

      <!-- should the broker detect dead locks and other issues -->
      <critical-analyzer>true</critical-analyzer>

      <critical-analyzer-timeout>120000</critical-analyzer-timeout>

      <critical-analyzer-check-period>60000</critical-analyzer-check-period>

      <critical-analyzer-policy>HALT</critical-analyzer-policy>

      ${page-sync.settings}

      ${global-max-section}
      <acceptors>

         <acceptor name="netty-acceptor">tcp://0.0.0.0:61618</acceptor>

         <!-- useEpoll means: it will use Netty epoll if you are on a system (Linux) that supports it -->
         <!-- amqpCredits: The number of credits sent to AMQP producers -->
         <!-- amqpLowCredits: The server will send the # credits specified at amqpCredits at this low mark -->
         <!-- amqpDuplicateDetection: If you are not using duplicate detection, set this to false
                                      as duplicate detection requires applicationProperties to be parsed on the server. -->
         <!-- amqpMinLargeMessageSize: Determines how many bytes are considered large, so we start using files to hold their data.
                                       default: 102400, -1 would mean to disable large message control -->

         <!-- Note: If an acceptor needs to be compatible with HornetQ and/or Artemis 1.x clients add
                    "anycastPrefix=jms.queue.;multicastPrefix=jms.topic." to the acceptor url.
                    See https://issues.apache.org/jira/browse/ARTEMIS-1644 for more information. -->


         <!-- Acceptor for every supported protocol -->
         <acceptor name="artemis">tcp://${host}:${default.port}?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true;supportAdvisory=${support-advisory};suppressInternalManagementObjects=${suppress-internal-management-objects}</acceptor>
${amqp-acceptor}${stomp-acceptor}${hornetq-acceptor}${mqtt-acceptor}
      </acceptors>

${cluster-security.settings}${cluster.settings}${replicated.settings}${shared-store.settings}
      <security-settings>
         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="${role}"/>
            <permission type="deleteNonDurableQueue" roles="${role}"/>
            <permission type="createDurableQueue" roles="${role}"/>
            <permission type="deleteDurableQueue" roles="${role}"/>
            <permission type="createAddress" roles="${role}"/>
            <permission type="deleteAddress" roles="${role}"/>
            <permission type="consume" roles="${role}"/>
            <permission type="browse" roles="${role}"/>
            <permission type="send" roles="${role}"/>
            <!-- we need this otherwise ./artemis data imp wouldn't work -->
            <permission type="manage" roles="${role}"/>
         </security-setting>
      </security-settings>

      <address-settings>
         <!-- if you define auto-create on certain queues, management has to be auto-create -->
         <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>${full-policy}</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>

         <!--default for catch all-->
         <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>${full-policy}</address-full-policy>
            <auto-create-queues>${auto-create}</auto-create-queues>
            <auto-create-addresses>${auto-create}</auto-create-addresses>
            <auto-create-jms-queues>${auto-create}</auto-create-jms-queues>
            <auto-create-jms-topics>${auto-create}</auto-create-jms-topics>
            <auto-delete-queues>${auto-delete}</auto-delete-queues>
            <auto-delete-addresses>${auto-delete}</auto-delete-addresses>
         </address-setting>
      </address-settings>

      <addresses>
         <address name="DLQ">
            <anycast>
               <queue name="DLQ" />
            </anycast>
         </address>
         <address name="ExpiryQueue">
            <anycast>
               <queue name="ExpiryQueue" />
            </anycast>
         </address>${address-queue.settings}
      </addresses>

      <broker-plugins>
         <broker-plugin class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
            <property key="LOG_ALL_EVENTS" value="true"/>
            <property key="LOG_CONNECTION_EVENTS" value="true"/>
            <property key="LOG_SESSION_EVENTS" value="true"/>
            <property key="LOG_CONSUMER_EVENTS" value="true"/>
            <property key="LOG_DELIVERING_EVENTS" value="true"/>
            <property key="LOG_SENDING_EVENTS" value="true"/>
            <property key="LOG_INTERNAL_EVENTS" value="true"/>
         </broker-plugin>
      </broker-plugins>


   </core>
</configuration>

This is my broker.xml configuration.
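One detail worth noting for Kubernetes: the netty-connector above advertises ${ipv4addr:localhost}, which falls back to localhost when the property is not set, and a connector advertising localhost is useless to the other brokers in the cluster. The pod IP can be injected with the Downward API; how it then reaches the broker as the ipv4addr property depends on your image's launch script, so the JAVA_ARGS handoff below is an assumption to adapt:

# Hypothetical container fragment: expose the pod IP so ${ipv4addr}
# resolves to an address the other brokers can reach.
containers:
  - name: artemis
    image: my-artemis:latest    # placeholder
    env:
      - name: POD_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP
      # Assumption: the image's entrypoint appends JAVA_ARGS to the JVM
      # options, turning this into -Dipv4addr=<pod ip>.
      - name: JAVA_ARGS
        value: "-Dipv4addr=$(POD_IP)"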

<config xmlns="urn:org:jgroups"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.0.xsd">

  <TCP
    enable_diagnostics="true"
    bind_addr="match-interface:eth0,lo"
    bind_port="7800"
    recv_buf_size="20000000"
    send_buf_size="640000"
    max_bundle_size="64000"
    max_bundle_timeout="30"
    sock_conn_timeout="300"

    thread_pool.enabled="true"
    thread_pool.min_threads="1"
    thread_pool.max_threads="10"
    thread_pool.keep_alive_time="5000"
    thread_pool.queue_enabled="false"
    thread_pool.queue_max_size="100"
    thread_pool.rejection_policy="run"

    oob_thread_pool.enabled="true"
    oob_thread_pool.min_threads="1"
    oob_thread_pool.max_threads="8"
    oob_thread_pool.keep_alive_time="5000"
    oob_thread_pool.queue_enabled="true"
    oob_thread_pool.queue_max_size="100"
    oob_thread_pool.rejection_policy="run"
  />

  <!-- <TRACE/> -->

  <org.jgroups.protocols.kubernetes.KUBE_PING
    namespace="${KUBERNETES_NAMESPACE:default}"
    labels="${KUBERNETES_LABELS:app=custom-artemis-service}"
  />

  <MERGE3 min_interval="10000" max_interval="30000"/>
  <FD_SOCK/>
  <FD timeout="10000" max_tries="5" />
  <VERIFY_SUSPECT timeout="1500" />
  <BARRIER />
  <pbcast.NAKACK use_mcast_xmit="false" retransmit_timeout="300,600,1200,2400,4800" discard_delivered_msgs="true"/>
  <UNICAST3
    xmit_table_num_rows="100"
    xmit_table_msgs_per_row="1000"
    xmit_table_max_compaction_time="30000"
  />
  <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="400000"/>
  <pbcast.GMS print_local_addr="true" join_timeout="3000" view_bundling="true"/>
  <FC max_credits="2000000" min_threshold="0.10"/>
  <FRAG2 frag_size="60000" />
  <pbcast.STATE_TRANSFER/>
  <pbcast.FLUSH timeout="0"/>

</config>

This is the jgroups.xml I used.
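One thing to verify with KUBE_PING: it discovers cluster members by asking the Kubernetes API server for the pods matching the configured namespace and labels, so the service account the broker pods run under needs permission to read pods. A minimal RBAC sketch, all names placeholders:

# Minimal RBAC sketch so KUBE_PING can list pods in its namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kube-ping-pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kube-ping-pod-reader-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kube-ping-pod-reader
subjects:
  - kind: ServiceAccount
    name: default    # or a dedicated service account used by the broker pods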

I used this config to set up a multi-pod deployment in Kubernetes, and I added the relevant KUBE_PING jars to the lib folder. Although both pods came up, the Artemis web console behaved inconsistently when I tried to access it. After logging in, the user lands on a UI page that asks to add connections; sometimes, even after a successful login, the user is redirected back to the login page. The user does not get the UI that normally appears with a single broker. I do not see any error logs either. Can anyone recommend the broker.xml changes needed for a Kubernetes deployment?
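One possible explanation for the console behaviour (an assumption, since the question does not show how the UI is exposed): if the web console is reached through a load-balanced Service, each request can land on a different pod, and the console's login session exists only on one broker, which would produce exactly this kind of bouncing back to the login page. Reaching each pod's console directly, e.g. through a headless Service and the per-pod DNS names of a StatefulSet, avoids that. A sketch, with the selector matching the KUBERNETES_LABELS used by KUBE_PING above:

# Headless Service sketch: gives each StatefulSet pod a stable DNS name
# (artemis-0.artemis-headless.<namespace>.svc..., artemis-1....) so a
# specific broker's console can be reached directly.
apiVersion: v1
kind: Service
metadata:
  name: artemis-headless
spec:
  clusterIP: None
  selector:
    app: custom-artemis-service
  ports:
    - name: web-console
      port: 8161    # default Artemis web console port
    - name: core
      port: 61618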

ArtemisCloud.io proposes a solution with an operator to deploy an ActiveMQ Artemis Kubernetes multi broker setup, see https://artemiscloud.io/blog/using_operator/ and https://artemiscloud.io/documentation/operator/deploying-brokers-operator.html
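With the operator installed, a multi-broker deployment is declared through its ActiveMQArtemis custom resource. A minimal sketch; the field values are illustrative, so check the linked documentation for the current schema:

# Illustrative ActiveMQArtemis custom resource for the ArtemisCloud operator.
apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: artemis-broker
spec:
  deploymentPlan:
    size: 2                    # number of broker pods (a StatefulSet underneath)
    persistenceEnabled: true   # one PVC per broker, avoiding the shared-lock problem
    messageMigration: true     # drain messages from brokers removed on scale-down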
