
ActiveMQ Artemis Kubernetes multi broker setup

I am trying to set up a multi-broker ActiveMQ Artemis deployment in a Kubernetes environment. I was able to run a single-pod deployment with persistence enabled successfully, using an Artemis Docker image built from the official repo.

However, if I try a multi-pod deployment attached to the same persistent volume (a shared PV), the pods are deployed but only one of them succeeds while the others crash, because the first Artemis container has already placed a file lock on the data directory. So I cannot start multiple pods with shared storage.
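
For reference, one common way to avoid this lock contention is to give every broker pod its own volume instead of one shared PV, e.g. a StatefulSet with volumeClaimTemplates. A minimal sketch (all names, the image, the mount path, and the storage size here are placeholder assumptions, not taken from my setup):

# Sketch: one PVC (and therefore one PV) per broker pod, so no two brokers
# ever share a journal directory and the file lock conflict cannot occur.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: artemis
spec:
  serviceName: artemis-headless   # a matching headless Service is assumed to exist
  replicas: 2
  selector:
    matchLabels:
      app: artemis
  template:
    metadata:
      labels:
        app: artemis
    spec:
      containers:
        - name: artemis
          image: registry.example.com/artemis:latest  # image built from the official repo
          ports:
            - containerPort: 61618
          volumeMounts:
            - name: data
              mountPath: /var/lib/artemis-instance/data  # wherever the broker data dir lives in the image
  volumeClaimTemplates:            # Kubernetes creates one PVC per pod from this template
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi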

I also tried the JGroups and broadcast approach to create a cluster, where each broker has its own storage and the brokers communicate with each other internally, but I could not get it configured successfully.

Has anyone been able to deploy a multi-broker Artemis setup in Kubernetes successfully? Giving each pod its own storage is not a problem, but the Artemis brokers should be highly available and should communicate with each other as a cluster so that we don't lose messages.
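
For the HA requirement, my understanding is that Artemis supports live/backup pairs with journal replication, which avoids shared storage entirely; a minimal broker.xml sketch of such a policy (this is an assumption about the approach, not something I have working):

<!-- sketch only: replication-based HA inside <core>, so each broker keeps its
     own journal and the live broker streams it to the backup over the cluster
     connection; no shared PV is required -->
<ha-policy>
   <replication>
      <master/>   <!-- on the backup broker this would be <slave/> -->
   </replication>
</ha-policy>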

It would be very helpful if anyone could share resources or steps on how to achieve this.

EDIT

<?xml version='1.0'?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xi="http://www.w3.org/2001/XInclude"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

   <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">

      <name>${name}</name>

${jdbc}
      <persistence-enabled>${persistence-enabled}</persistence-enabled>

      <connectors>
         <connector name="netty-connector">tcp://${ipv4addr:localhost}:61618</connector>
      </connectors>
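
      <!-- this connector is what gets advertised to the rest of the cluster via
           the broadcast group below, so each pod must announce an address its
           peers can reach; if ${ipv4addr} falls back to localhost, the other
           brokers cannot connect back to this node -->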

      <broadcast-groups>
         <broadcast-group name="cluster-broadcast-group">
            <broadcast-period>5000</broadcast-period>
            <jgroups-file>jgroups.xml</jgroups-file>
            <jgroups-channel>active_broadcast_channel</jgroups-channel>
            <connector-ref>netty-connector</connector-ref>
         </broadcast-group>
      </broadcast-groups>

      <discovery-groups>
         <discovery-group name="cluster-discovery-group">
            <jgroups-file>jgroups.xml</jgroups-file>
            <jgroups-channel>active_broadcast_channel</jgroups-channel>
            <refresh-timeout>10000</refresh-timeout>
         </discovery-group>
      </discovery-groups>

      <cluster-connections>
         <cluster-connection name="artemis-cluster">
            <connector-ref>netty-connector</connector-ref>
            <retry-interval>500</retry-interval>
            <use-duplicate-detection>true</use-duplicate-detection>
            <message-load-balancing>STRICT</message-load-balancing>
            <!-- <address>jms</address> -->
            <max-hops>1</max-hops>
            <discovery-group-ref discovery-group-name="cluster-discovery-group"/>
            <!-- <forward-when-no-consumers>true</forward-when-no-consumers> -->
         </cluster-connection>
      </cluster-connections>

      <!-- this could be ASYNCIO, MAPPED, NIO
           ASYNCIO: Linux Libaio
           MAPPED: mmap files
           NIO: Plain Java Files
       -->
      <journal-type>${journal.settings}</journal-type>

      <paging-directory>${data.dir}/paging</paging-directory>

      <bindings-directory>${data.dir}/bindings</bindings-directory>

      <journal-directory>${data.dir}/journal</journal-directory>

      <large-messages-directory>${data.dir}/large-messages</large-messages-directory>

      ${journal-retention}

      <journal-datasync>${fsync}</journal-datasync>

      <journal-min-files>2</journal-min-files>

      <journal-pool-files>10</journal-pool-files>

      <journal-device-block-size>${device-block-size}</journal-device-block-size>

      <journal-file-size>10M</journal-file-size>
      ${journal-buffer.settings}${ping-config.settings}${connector-config.settings}

      <!-- how often we are looking for how many bytes are being used on the disk in ms -->
      <disk-scan-period>5000</disk-scan-period>

      <!-- once the disk hits this limit the system will block, or close the connection in certain protocols
           that won't support flow control. -->
      <max-disk-usage>90</max-disk-usage>

      <!-- should the broker detect dead locks and other issues -->
      <critical-analyzer>true</critical-analyzer>

      <critical-analyzer-timeout>120000</critical-analyzer-timeout>

      <critical-analyzer-check-period>60000</critical-analyzer-check-period>

      <critical-analyzer-policy>HALT</critical-analyzer-policy>

      ${page-sync.settings}

      ${global-max-section}
      <acceptors>

         <acceptor name="netty-acceptor">tcp://0.0.0.0:61618</acceptor>

         <!-- useEpoll means: it will use Netty epoll if you are on a system (Linux) that supports it -->
         <!-- amqpCredits: The number of credits sent to AMQP producers -->
         <!-- amqpLowCredits: The server will send the # credits specified at amqpCredits at this low mark -->
         <!-- amqpDuplicateDetection: If you are not using duplicate detection, set this to false
                                      as duplicate detection requires applicationProperties to be parsed on the server. -->
         <!-- amqpMinLargeMessageSize: Determines how many bytes are considered large, so we start using files to hold their data.
                                       default: 102400, -1 would mean to disable large message control -->

         <!-- Note: If an acceptor needs to be compatible with HornetQ and/or Artemis 1.x clients add
                    "anycastPrefix=jms.queue.;multicastPrefix=jms.topic." to the acceptor url.
                    See https://issues.apache.org/jira/browse/ARTEMIS-1644 for more information. -->


         <!-- Acceptor for every supported protocol -->
         <acceptor name="artemis">tcp://${host}:${default.port}?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true;supportAdvisory=${support-advisory};suppressInternalManagementObjects=${suppress-internal-management-objects}</acceptor>
${amqp-acceptor}${stomp-acceptor}${hornetq-acceptor}${mqtt-acceptor}
      </acceptors>

${cluster-security.settings}${cluster.settings}${replicated.settings}${shared-store.settings}
      <security-settings>
         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="${role}"/>
            <permission type="deleteNonDurableQueue" roles="${role}"/>
            <permission type="createDurableQueue" roles="${role}"/>
            <permission type="deleteDurableQueue" roles="${role}"/>
            <permission type="createAddress" roles="${role}"/>
            <permission type="deleteAddress" roles="${role}"/>
            <permission type="consume" roles="${role}"/>
            <permission type="browse" roles="${role}"/>
            <permission type="send" roles="${role}"/>
            <!-- we need this otherwise ./artemis data imp wouldn't work -->
            <permission type="manage" roles="${role}"/>
         </security-setting>
      </security-settings>

      <address-settings>
         <!-- if you define auto-create on certain queues, management has to be auto-create -->
         <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>${full-policy}</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>

         <!--default for catch all-->
         <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>${full-policy}</address-full-policy>
            <auto-create-queues>${auto-create}</auto-create-queues>
            <auto-create-addresses>${auto-create}</auto-create-addresses>
            <auto-create-jms-queues>${auto-create}</auto-create-jms-queues>
            <auto-create-jms-topics>${auto-create}</auto-create-jms-topics>
            <auto-delete-queues>${auto-delete}</auto-delete-queues>
            <auto-delete-addresses>${auto-delete}</auto-delete-addresses>
         </address-setting>
      </address-settings>

      <addresses>
         <address name="DLQ">
            <anycast>
               <queue name="DLQ" />
            </anycast>
         </address>
         <address name="ExpiryQueue">
            <anycast>
               <queue name="ExpiryQueue" />
            </anycast>
         </address>${address-queue.settings}
      </addresses>


     
      <broker-plugins>
         <broker-plugin class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
            <property key="LOG_ALL_EVENTS" value="true"/>
            <property key="LOG_CONNECTION_EVENTS" value="true"/>
            <property key="LOG_SESSION_EVENTS" value="true"/>
            <property key="LOG_CONSUMER_EVENTS" value="true"/>
            <property key="LOG_DELIVERING_EVENTS" value="true"/>
            <property key="LOG_SENDING_EVENTS" value="true"/>
            <property key="LOG_INTERNAL_EVENTS" value="true"/>
         </broker-plugin>
      </broker-plugins>


   </core>
</configuration>

This is my broker.xml configuration.

<config xmlns="urn:org:jgroups"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.0.xsd">

  <TCP
    enable_diagnostics="true"
    bind_addr="match-interface:eth0,lo"
    bind_port="7800"
    recv_buf_size="20000000"
    send_buf_size="640000"
    max_bundle_size="64000"
    max_bundle_timeout="30"
    sock_conn_timeout="300"

    thread_pool.enabled="true"
    thread_pool.min_threads="1"
    thread_pool.max_threads="10"
    thread_pool.keep_alive_time="5000"
    thread_pool.queue_enabled="false"
    thread_pool.queue_max_size="100"
    thread_pool.rejection_policy="run"

    oob_thread_pool.enabled="true"
    oob_thread_pool.min_threads="1"
    oob_thread_pool.max_threads="8"
    oob_thread_pool.keep_alive_time="5000"
    oob_thread_pool.queue_enabled="true"
    oob_thread_pool.queue_max_size="100"
    oob_thread_pool.rejection_policy="run"
  />

  <!-- <TRACE/> -->

  <org.jgroups.protocols.kubernetes.KUBE_PING
    namespace="${KUBERNETES_NAMESPACE:default}"
    labels="${KUBERNETES_LABELS:app=custom-artemis-service}"
  />

  <MERGE3 min_interval="10000" max_interval="30000"/>
  <FD_SOCK/>
  <FD timeout="10000" max_tries="5" />
  <VERIFY_SUSPECT timeout="1500" />
  <BARRIER />
  <pbcast.NAKACK use_mcast_xmit="false" retransmit_timeout="300,600,1200,2400,4800" discard_delivered_msgs="true"/>
  <UNICAST3
    xmit_table_num_rows="100"
    xmit_table_msgs_per_row="1000"
    xmit_table_max_compaction_time="30000"
  />
  <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="400000"/>
  <pbcast.GMS print_local_addr="true" join_timeout="3000" view_bundling="true"/>
  <FC max_credits="2000000" min_threshold="0.10"/>
  <FRAG2 frag_size="60000" />
  <pbcast.STATE_TRANSFER/>
  <pbcast.FLUSH timeout="0"/>

</config>

This is the jgroups.xml I used.
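
One thing worth noting: KUBE_PING discovers peers by querying the Kubernetes API for pods, so the service account the broker pods run under needs get/list permission on pods in the namespace. A sketch of the RBAC objects (all names are placeholders):

# Sketch: RBAC so KUBE_PING can list the pods in its own namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kube-ping-pod-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kube-ping-pod-reader-binding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: default          # the service account the broker pods actually use
    namespace: default
roleRef:
  kind: Role
  name: kube-ping-pod-reader
  apiGroup: rbac.authorization.k8s.io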

I used this configuration to set up the multi-pod deployment in k8s, and I added the relevant KUBE_PING jars to the lib folder. Although both pods start up, the behavior is inconsistent when I try to access the Artemis web UI. After logging in, the user lands on a UI page asking to add a connection. Sometimes, even after a successful login, the user is redirected back to the login page. Users normally do not get this UI when there is only one broker. I also don't see any error logs. Can anyone suggest the broker.xml changes required for a Kubernetes deployment?
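
One guess at the console behavior: if both pods sit behind a normal Service, each HTTP request can land on a different broker, so the login session created on one pod is unknown to the other. Pinning console traffic to a single pod, e.g. with session affinity, might rule that out. A sketch (port 8161 is the default Artemis web console port; the selector reuses the label from jgroups.xml above, the Service name is a placeholder):

# Sketch: keep each console client on the same broker pod so the
# login session cookie stays valid across requests.
apiVersion: v1
kind: Service
metadata:
  name: artemis-console
spec:
  selector:
    app: custom-artemis-service
  sessionAffinity: ClientIP      # route a given client IP to one pod
  ports:
    - name: http
      port: 8161
      targetPort: 8161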

ArtemisCloud.io proposes a solution with an operator to deploy an ActiveMQ Artemis Kubernetes multi-broker setup, see https://artemiscloud.io/blog/using_operator/ and https://artemiscloud.io/documentation/operator/deploying-brokers-operator.html
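
From the linked documentation, the operator is driven by an ActiveMQArtemis custom resource; a minimal sketch along those lines (the resource name and sizes are illustrative, check the docs for the exact fields supported by your operator version):

# Sketch: let the ArtemisCloud operator run a clustered, persistent
# two-broker deployment instead of hand-rolling StatefulSets.
apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: ex-aao
spec:
  deploymentPlan:
    size: 2                     # two broker pods
    clustered: true             # brokers form a cluster automatically
    persistenceEnabled: true    # the operator provisions one PVC per broker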
