
How to use Apache ActiveMQ Artemis in Kubernetes

I have a workload in Kubernetes that contains an Apache ActiveMQ Artemis broker. The broker starts properly when the workload has a single pod; the problems begin when I try to scale it. The brokers in the pods can't connect to each other, so I can't scale the workload. My final goal is to make it scalable. I tried the same setup locally with two Docker containers and it worked fine.

Here is my broker.xml:

<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xi="http://www.w3.org/2001/XInclude"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

    <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">

        <name>Broker1</name>

        <broadcast-groups>
            <broadcast-group name="brokerCluster-broadcast">
                <local-bind-address>0.0.0.0</local-bind-address>
                <local-bind-port>10000</local-bind-port>
                <group-address>231.7.7.7</group-address>
                <group-port>9876</group-port>
                <broadcast-period>20</broadcast-period>
                <connector-ref>netty-connector</connector-ref>
            </broadcast-group>
        </broadcast-groups>

        <discovery-groups>
            <discovery-group name="brokerCluster-discovery">
                <local-bind-port>10000</local-bind-port>
                <local-bind-address>0.0.0.0</local-bind-address>
                <group-address>231.7.7.7</group-address>
                <group-port>9876</group-port>
                <refresh-timeout>10</refresh-timeout>
            </discovery-group>
        </discovery-groups>

        <cluster-connections>
            <cluster-connection name="brokerCluster">
                <connector-ref>netty-connector</connector-ref>
                <retry-interval>500</retry-interval>
                <use-duplicate-detection>true</use-duplicate-detection>
                <message-load-balancing>ON_DEMAND</message-load-balancing>
                <max-hops>1</max-hops>
                <discovery-group-ref discovery-group-name="brokerCluster-discovery"/>
            </cluster-connection>
        </cluster-connections>

        <connectors>
            <connector name="netty-connector">tcp://0.0.0.0:61610</connector>
        </connectors>

        <persistence-enabled>true</persistence-enabled>

        <journal-type>NIO</journal-type>

        <paging-directory>data/paging</paging-directory>

        <bindings-directory>data/bindings</bindings-directory>

        <journal-directory>data/journal</journal-directory>

        <large-messages-directory>data/large-messages</large-messages-directory>

        <journal-datasync>true</journal-datasync>

        <journal-min-files>2</journal-min-files>

        <journal-pool-files>10</journal-pool-files>

        <journal-device-block-size>4096</journal-device-block-size>

        <journal-file-size>10M</journal-file-size>

        <journal-buffer-timeout>536000</journal-buffer-timeout>

        <disk-scan-period>5000</disk-scan-period>

        <max-disk-usage>90</max-disk-usage>

        <critical-analyzer>true</critical-analyzer>

        <critical-analyzer-timeout>120000</critical-analyzer-timeout>

        <critical-analyzer-check-period>60000</critical-analyzer-check-period>

        <critical-analyzer-policy>HALT</critical-analyzer-policy>


        <page-sync-timeout>536000</page-sync-timeout>

        <acceptors>

            <acceptor name="netty-acceptor">tcp://0.0.0.0:61610</acceptor>

            <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>

            <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>

            <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>

            <acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>

            <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>

        </acceptors>


        <security-settings>
            <security-setting match="#">
                <permission type="createNonDurableQueue" roles="amq"/>
                <permission type="deleteNonDurableQueue" roles="amq"/>
                <permission type="createDurableQueue" roles="amq"/>
                <permission type="deleteDurableQueue" roles="amq"/>
                <permission type="createAddress" roles="amq"/>
                <permission type="deleteAddress" roles="amq"/>
                <permission type="consume" roles="amq"/>
                <permission type="browse" roles="amq"/>
                <permission type="send" roles="amq"/>
                <permission type="manage" roles="amq"/>
            </security-setting>
        </security-settings>

        <address-settings>
            <address-setting match="activemq.management#">
                <dead-letter-address>DLQ</dead-letter-address>
                <expiry-address>ExpiryQueue</expiry-address>
                <redelivery-delay>0</redelivery-delay>
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
                <auto-create-queues>true</auto-create-queues>
                <auto-create-addresses>true</auto-create-addresses>
                <auto-create-jms-queues>true</auto-create-jms-queues>
                <auto-create-jms-topics>true</auto-create-jms-topics>
            </address-setting>
            <address-setting match="#">
                <dead-letter-address>DLQ</dead-letter-address>
                <expiry-address>ExpiryQueue</expiry-address>
                <redelivery-delay>0</redelivery-delay>
                <redistribution-delay>0</redistribution-delay>
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
                <auto-create-queues>true</auto-create-queues>
                <auto-create-addresses>true</auto-create-addresses>
                <auto-create-jms-queues>true</auto-create-jms-queues>
                <auto-create-jms-topics>true</auto-create-jms-topics>
            </address-setting>
        </address-settings>

        <addresses>
            <address name="DLQ">
                <anycast>
                    <queue name="DLQ" />
                </anycast>
            </address>
            <address name="ExpiryQueue">
                <anycast>
                    <queue name="ExpiryQueue" />
                </anycast>
            </address>
            <address name="TestQueue">
                <anycast>
                    <queue name="testQueue" />
                </anycast>
            </address>
        </addresses>
    </core>
</configuration>

Edit: attached the Kubernetes and Docker configs.

deployment.yml

apiVersion: v1
kind: Service
metadata:
  name: artemis
  labels:
    app: artemis
spec:
  ports:
  - port: 6161
    name: service
    protocol: UDP
  - port: 8161
    name: console
    protocol: UDP
  - port: 9876
    name: broadcast
    protocol: UDP
  - port: 61610
    name: netty-connector
    protocol: TCP
  - port: 5672
    name: acceptor-amqp
    protocol: TCP
  - port: 61613
    name: acceptor-stomp
    protocol: TCP
  - port: 5445
    name: accep-hornetq
    protocol: TCP
  - port: 1883
    name: acceptor-mqt
    protocol: TCP
  - port: 10000
    protocol: UDP
    name: brokercluster-broadcast # this name is invalid (port names must be 15 characters or fewer), but I wanted it to match my broker.xml
  clusterIP: None
  selector:
    app: artemis01
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: artemis01headless
  namespace: artemis
spec:
  selector:
    matchLabels:
      app: artemis01 
  serviceName: artemis01
  replicas: 3
  template:
    metadata:
      labels:
        app: artemis01 
    spec:
      affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: app
                  operator: In
                  values:
                  - worker
      containers:
        - env:
          - name: ARTEMIS_PASSWORD
            value: admin
          - name: ARTEMIS_USER
            value: admin
          - name: MY_POD_IP
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: status.podIP
          name: artemis
          image: 
          ports:
          - containerPort: 6161
            name: service
            protocol: UDP
          - containerPort: 8161
            name: console
            protocol: UDP
          - containerPort: 9876
            name: broadcast
            protocol: UDP
          - containerPort: 61610
            name: netty-connector
            protocol: TCP
          - containerPort: 5672
            name: acceptor-amqp
            protocol: TCP
          - containerPort: 61613
            name: acceptor-stomp
            protocol: TCP
          - containerPort: 5445
            name: accep-hornetq
            protocol: TCP
          - containerPort: 1883
            name: acceptor-mqtt
            protocol: TCP
          - containerPort: 10000
            name: brokercluster-broadcast
            protocol: UDP
      imagePullSecrets:
        - name: xxxxxxx

Dockerfile source

# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

# ActiveMQ Artemis

FROM jboss/base-jdk:8
LABEL maintainer="Apache ActiveMQ Team"
# Make sure pipes are considered to determine success, see: https://github.com/hadolint/hadolint/wiki/DL4006
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
WORKDIR /opt

ENV ARTEMIS_USER artemis
ENV ARTEMIS_PASSWORD artemis
ENV ANONYMOUS_LOGIN false
ENV CREATE_ARGUMENTS --user ${ARTEMIS_USER} --password ${ARTEMIS_PASSWORD} --silent --http-host 0.0.0.0 --relax-jolokia

USER root

# add user and group for artemis
RUN groupadd -g 1001 -r artemis && useradd -r -u 1001 -g artemis artemis \
 && yum install -y libaio && yum -y clean all

USER artemis

ADD . /opt/activemq-artemis

# Web Server
EXPOSE 8161 \
    61610 \
    9876 \
    61613 \
    61616 \
    5672 \
    5445 \
    1883 \
    10000

USER root

RUN mkdir /var/lib/artemis-instance && chown -R artemis.artemis /var/lib/artemis-instance

COPY ./docker/docker-run.sh /

USER artemis

# Expose some outstanding folders
VOLUME ["/var/lib/artemis-instance"]
WORKDIR /var/lib/artemis-instance

ENTRYPOINT ["/docker-run.sh"]
CMD ["run"]

docker-run.sh

#!/bin/sh
set -e

BROKER_HOME=/var/lib/
CONFIG_PATH=$BROKER_HOME/etc
export BROKER_HOME OVERRIDE_PATH CONFIG_PATH

echo CREATE_ARGUMENTS=${CREATE_ARGUMENTS}

if ! [ -f ./etc/broker.xml ]; then
    /opt/activemq-artemis/bin/artemis create ${CREATE_ARGUMENTS} .
    #the script copies my broker.xml to /var/lib/artemis-instance/etc/broker.xml here.
    sed -i -e 's|$PLACEHOLDERIP|'$MY_POD_IP'|g' /var/lib/artemis-instance/etc/broker.xml
else
    echo "broker already created, ignoring creation"
fi

exec ./bin/artemis "$@"

I believe the issue is with your connector configuration. This is what you're using:

<connector name="netty-connector">tcp://0.0.0.0:61610</connector>

The information from this connector is broadcast to the other cluster members because you've referenced it in the <connector-ref> of your <cluster-connection>. The other cluster members then try to use that information to connect back to the node that broadcast it. However, 0.0.0.0 won't make sense to a remote client.

The address 0.0.0.0 is a meta-address. In the context of a listener (e.g. an Artemis acceptor) it means the listener will accept connections on all local addresses. In the context of a connector it doesn't really have a meaning. See this article for more about 0.0.0.0.

You should use a real IP address or hostname that a client can actually use to get a network route to the server.
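For example, since your docker-run.sh already exposes the pod's address via MY_POD_IP, one option (just a sketch, assuming you keep the sed-based substitution) is to put a placeholder in the connector and let the script fill it in at startup. Alternatively, with a StatefulSet behind a headless Service each pod has a stable DNS name of the form <pod-name>.<serviceName>.<namespace>.svc.cluster.local, provided the StatefulSet's serviceName actually points at a headless Service:

<connectors>
    <!-- option 1: docker-run.sh's sed replaces $PLACEHOLDERIP with the pod's IP (MY_POD_IP) -->
    <connector name="netty-connector">tcp://$PLACEHOLDERIP:61610</connector>

    <!-- option 2 (hypothetical name, assuming a headless Service named artemis01 exists):
    <connector name="netty-connector">tcp://artemis01headless-0.artemis01.artemis.svc.cluster.local:61610</connector>
    -->
</connectors>

Either way, each broker then advertises an address that the other pods can actually route to.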

Also, since you're using UDP multicast (i.e. via the <broadcast-group> and <discovery-group>), make sure multicast actually works between the containers/pods. If you can't get UDP multicast working in your environment (or simply don't want to use it), you can switch to a static cluster configuration. Refer to the documentation and the "clustered static discovery" example for details on how to configure this.
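If you go the static route, a rough sketch (the hostnames below are hypothetical; in a StatefulSet they would typically be the pods' stable DNS names) replaces the broadcast/discovery groups with an explicit list of the other cluster members:

<connectors>
    <!-- this node's own, routable address -->
    <connector name="netty-connector">tcp://artemis-0.example.svc:61610</connector>
    <!-- the other cluster members (hypothetical hostnames) -->
    <connector name="broker1-connector">tcp://artemis-1.example.svc:61610</connector>
    <connector name="broker2-connector">tcp://artemis-2.example.svc:61610</connector>
</connectors>

<cluster-connections>
    <cluster-connection name="brokerCluster">
        <connector-ref>netty-connector</connector-ref>
        <retry-interval>500</retry-interval>
        <use-duplicate-detection>true</use-duplicate-detection>
        <message-load-balancing>ON_DEMAND</message-load-balancing>
        <max-hops>1</max-hops>
        <static-connectors>
            <connector-ref>broker1-connector</connector-ref>
            <connector-ref>broker2-connector</connector-ref>
        </static-connectors>
    </cluster-connection>
</cluster-connections>

Note that with static discovery each pod needs its own list of the other members, which is why multicast-based discovery (when it works) is usually more convenient in Kubernetes.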
