
Camel ActiveMQ Performance Tuning

Situation

At present, we use some custom code on top of the ActiveMQ libraries for JMS messaging. I have been looking at switching to Camel for ease of use, ease of maintenance, and reliability.

Problem

With my present configuration, Camel's ActiveMQ implementation is substantially slower than our old implementation, both in the per-message delay between sending and receiving and in the time taken to send and receive a large flood of messages. I've tried tweaking some configuration (e.g. maximum connections), to no avail.

Test Approach

I have two applications, one using our old implementation and one using a Camel implementation. Each application sends JMS messages to a topic on a local ActiveMQ server and also listens for messages on that topic. This is used to test two scenarios:
- Sending 100,000 messages to the topic in a loop, and seeing how long it takes from the start of sending to the end of handling all of them.
- Sending a message every 100 ms and measuring the delay (in ns) from sending to handling each message.

Question

Can I improve on the implementation below, in terms of the time from sending to processing, both for a flood of messages and for individual messages? Ideally, improvements would involve tweaking some configuration I have missed, or suggesting a better way to do it, without being too hacky. Explanations of any improvements would be appreciated.

Edit: Now that I am sending messages asynchronously, I appear to have a concurrency issue. receivedCount does not reach 100,000. Looking at the ActiveMQ web interface, 100,000 messages are enqueued and 100,000 dequeued, so it's probably a problem on the message processing side. I've changed receivedCount to an AtomicInteger and added some logging to aid debugging. Could this be a problem with Camel itself (or the ActiveMQ components), or is there something wrong with the message processing code? As far as I can tell, only ~99,876 messages are making it through to floodProcessor.process.

Test Implementation

Edit: Updated with async sending and logging for the concurrency issue.

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.activemq.pool.PooledConnectionFactory;
import org.apache.camel.CamelContext;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsConfiguration;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.log4j.Logger;

public class CamelJmsTest{
    private static final Logger logger = Logger.getLogger(CamelJmsTest.class);

    private static final boolean flood = true;
    private static final int NUM_MESSAGES = 100000;

    private final CamelContext context;
    private final ProducerTemplate producerTemplate;

    private long timeSent = 0;

    private final AtomicInteger sendCount = new AtomicInteger(0);
    private final AtomicInteger receivedCount = new AtomicInteger(0);

    public CamelJmsTest() throws Exception {
        context = new DefaultCamelContext();

        ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory("tcp://localhost:61616");

        PooledConnectionFactory pooledConnectionFactory = new PooledConnectionFactory(connectionFactory);

        JmsConfiguration jmsConfiguration = new JmsConfiguration(pooledConnectionFactory);
        logger.info(jmsConfiguration.isTransacted());

        ActiveMQComponent activeMQComponent = ActiveMQComponent.activeMQComponent();
        activeMQComponent.setConfiguration(jmsConfiguration);

        context.addComponent("activemq", activeMQComponent);

        RouteBuilder builder = new RouteBuilder() {
            @Override
            public void configure() {
                Processor floodProcessor = new Processor() {
                    @Override
                    public void process(Exchange exchange) throws Exception {
                        int newCount = receivedCount.incrementAndGet();

                        //TODO: Why doesn't newCount hit 100,000? Remove this logging once fixed
                        logger.info(newCount + ":" + exchange.getIn().getBody());

                        if(newCount == NUM_MESSAGES){
                            logger.info("all messages received at " + System.currentTimeMillis());
                        }
                    }
                };

                Processor spamProcessor = new Processor() {
                    @Override
                    public void process(Exchange exchange) throws Exception {
                        long delay = System.nanoTime() - timeSent;

                        logger.info("Message received: " + exchange.getIn().getBody(List.class) + " delay: " + delay);
                    }
                };

                from("activemq:topic:test?exchangePattern=InOnly")//.threads(8) // Having 8 threads processing appears to make things marginally worse
                    .choice()
                        .when(body().isInstanceOf(List.class)).process(flood ? floodProcessor : spamProcessor)
                    .otherwise().process(new Processor() {
                        @Override
                        public void process(Exchange exchange) throws Exception {
                            logger.info("Unknown message type received: " + exchange.getIn().getBody());
                        }
                    });
            }
        };

        context.addRoutes(builder);

        producerTemplate = context.createProducerTemplate();
        // For some reason, producerTemplate.asyncSendBody requires an Endpoint to be passed in, so the below is redundant:
//      producerTemplate.setDefaultEndpointUri("activemq:topic:test?exchangePattern=InOnly");
    }

    public void send(){
        int newCount = sendCount.incrementAndGet();
        producerTemplate.asyncSendBody("activemq:topic:test?exchangePattern=InOnly", Arrays.asList(newCount));
    }

    public void spam(){
        Executors.newSingleThreadScheduledExecutor().scheduleWithFixedDelay(new Runnable() {
            @Override
            public void run() {
                timeSent = System.nanoTime();
                send();
            }
        }, 1000, 100, TimeUnit.MILLISECONDS);
    }

    public void flood(){
        logger.info("starting flood at " + System.currentTimeMillis());
        for (int i = 0; i < NUM_MESSAGES; i++) {
            send();
        }
        logger.info("flooded at " + System.currentTimeMillis());
    }

    public static void main(String... args) throws Exception {
        CamelJmsTest camelJmsTest = new CamelJmsTest();
        camelJmsTest.context.start();

        if(flood){
            camelJmsTest.flood();
        }else{
            camelJmsTest.spam();
        }
    }
}

It appears from your current JmsConfiguration that you are only consuming messages with a single thread. Was this intended?

If not, you need to set the concurrentConsumers property to something higher. This will create a thread pool of JMS listeners to service your destination.

Example:

JmsConfiguration config = new JmsConfiguration(pooledConnectionFactory);
config.setConcurrentConsumers(10);

This will create 10 JMS listener threads that will process messages concurrently from your queue.
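As a side note (my own addition, not part of the original answer), the same setting can also be applied per endpoint through the Camel JMS URI option, which avoids touching the shared JmsConfiguration. A minimal sketch, assuming a queue named test and reusing the floodProcessor from the question, inside the RouteBuilder:

// Hypothetical equivalent using the endpoint URI option; it only affects
// this one consumer endpoint rather than every endpoint on the component.
from("activemq:queue:test?concurrentConsumers=10")
    .process(floodProcessor);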

EDIT:

For topics you can do something like this:

JmsConfiguration config = new JmsConfiguration(pooledConnectionFactory);
config.setConcurrentConsumers(1);
config.setMaxConcurrentConsumers(1);

And then in your route:

from("activemq:topic:test?exchangePattern=InOnly").threads(10)

Also, in ActiveMQ you can use a virtual destination. The virtual topic will act like a queue, and then you can use the same concurrentConsumers method you would use for a normal queue.
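To illustrate (a sketch of my own, assuming the broker's default VirtualTopic.> interceptor is enabled and reusing the floodProcessor from the question): producers publish to the topic side of the virtual destination, and each consuming application reads its own copy from a Consumer.<name> queue, where concurrentConsumers behaves as it does for any queue.

// Producer side: publish to the virtual topic (default naming convention).
// "body" stands in for the message payload.
producerTemplate.asyncSendBody("activemq:topic:VirtualTopic.test", body);

// Consumer side: each application consumes from its own queue, so the
// messages can be processed by a pool of concurrent consumers.
from("activemq:queue:Consumer.A.VirtualTopic.test?concurrentConsumers=10")
    .process(floodProcessor);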

Further Edit (For Sending):

You are currently doing a blocking send. You need to use producerTemplate.asyncSendBody() instead.
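For context, a minimal sketch of the difference, using the endpoint URI from the question ("body" stands in for the message payload):

import java.util.concurrent.Future;

// sendBody() blocks the calling thread until the send has completed.
producerTemplate.sendBody("activemq:topic:test?exchangePattern=InOnly", body);

// asyncSendBody() hands the send off to Camel's asynchronous producer and
// returns a Future immediately, so the flood loop is not throttled by each
// individual send.
Future<Object> future =
    producerTemplate.asyncSendBody("activemq:topic:test?exchangePattern=InOnly", body);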


Edit

I just built a project with your code and ran it. I set a breakpoint in your floodProcessor method and newCount is reaching 100,000. I think you may be getting thrown off by your logging and the fact that you are sending and receiving asynchronously. On my machine newCount hit 100,000 and the "all messages received" message was logged well under 1 second after execution, but the program continued to log for another 45 seconds afterwards since the output was buffered. You can see the effect of logging on how close your newCount number is to your body number by reducing the logging. I turned my logging to INFO, shut off Camel's logging, and the two numbers matched at the end of the log output:

INFO  CamelJmsTest - 99996:[99996]
INFO  CamelJmsTest - 99997:[99997]
INFO  CamelJmsTest - 99998:[99998]
INFO  CamelJmsTest - 99999:[99999]
INFO  CamelJmsTest - 100000:[100000]
INFO  CamelJmsTest - all messages received at 1358778578422
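For reference, one way to dial the logging down for the flood test (my own sketch, assuming log4j 1.x as in the question's imports; the same can be done in log4j.properties):

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

// Quieten Camel's internal logging while keeping the test's own INFO output.
Logger.getLogger("org.apache.camel").setLevel(Level.WARN);
Logger.getLogger(CamelJmsTest.class).setLevel(Level.INFO);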

I took over from the original poster in looking at this as part of another task, and found the problem with losing messages was actually in the ActiveMQ config.

We had the setting sendFailIfNoSpace=true, which was resulting in messages being dropped if we were sending fast enough to fill the publisher's cache. Playing around with the policyEntry topic cache size, I could vary the number of messages that disappeared, with as much reliability as can be expected of such a race condition. Setting sendFailIfNoSpace=false (the default), I could have any cache size I liked and never fail to receive all messages.
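For illustration only (a sketch of my own using an embedded broker; our real broker sets the equivalent sendFailIfNoSpace attribute on the systemUsage element in activemq.xml):

import org.apache.activemq.broker.BrokerService;

// With sendFailIfNoSpace=true, a producer that outruns the destination's
// cache gets an error (or, as we saw, messages silently go missing); with
// the default of false the producer blocks until space is available.
BrokerService broker = new BrokerService();
broker.getSystemUsage().setSendFailIfNoSpace(false);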

In theory sendFailIfNoSpace should throw a ResourceAllocationException when it drops a message, but that is either not happening(!) or is being ignored somehow. Also interesting is that our custom JMS wrapper code doesn't hit this problem despite running the throughput test faster than Camel. Maybe that code is faster in a way that means the publishing cache is emptied faster, or else we are overriding sendFailIfNoSpace somewhere in the connection code that I haven't found yet.

On the question of speed, we have implemented all the suggestions mentioned here so far except for virtual destinations, but the Camel version of the 100K-message test still runs in 16 seconds on my machine, compared to 10 seconds for our own wrapper. As mentioned above, I have a sneaking suspicion that we are (implicitly or otherwise) overriding some config somewhere in our wrapper, but I doubt it is anything that would cause that big a performance boost within ActiveMQ.

Virtual destinations, as mentioned by gwithake, might speed up this particular test, but most of the time, with our real workloads, they are not an appropriate solution.
