
Slow queue tailer on multi-threaded appenders queue

I have a scenario where multiple threads are writing to the same queue.

Appender threads receive updates from different markets (one market per thread) and push that data into the same queue:

ChronicleQueue queue = SingleChronicleQueueBuilder.binary(path + "/market").build();
ExcerptAppender appender = queue.acquireAppender(); // assumed: one appender per writer thread
final ExcerptTailer tailer = queue.createTailer();

appender.writeDocument(wire -> {
    wire.getValueOut().text("buy")
        .getValueOut().text(exchange.name())
        .getValueOut().text(currencyPair.toString())
        .getValueOut().dateTime(LocalDateTime.now(Clock.systemUTC()))
        .getValueOut().text(price);
});

Then I have a completely separate process (a different JVM) that continuously reads from the queue by doing:

while (true) {
    tailer.readDocument(........

But while I generate about 10 updates to the queue per second, the tailer processes only about one record every 3 seconds. I think I am missing something fundamental here :-)

Also, what is the correct way to continuously listen for updates on the queue? I wasn't able to find any solution other than while (true) and then reading...

I am developing on an 18-core machine (36 threads) and use Java Affinity to assign each worker to its own CPU.
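
The pinning itself looks roughly like this (a sketch, not the actual code; runMarketUpdater() is just a placeholder for the per-market loop):

AffinityLock lock = AffinityLock.acquireLock(); // net.openhft.affinity.AffinityLock
try {
    runMarketUpdater(); // placeholder: the per-market appender loop runs on the pinned CPU
} finally {
    lock.release();
}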

Thanks for any hints.

Creating a queue is very expensive; try to do this only once per process if you can.
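
For example (a sketch, not from the original answer): build the queue once per process and let each market thread acquire its own appender, since acquireAppender() returns a thread-local appender.

ChronicleQueue queue = SingleChronicleQueueBuilder.binary(path + "/market").build();

// each writer thread does this with the shared queue instance
Runnable marketWriter = () -> {
    ExcerptAppender appender = queue.acquireAppender(); // thread-local, cheap after the first call
    // ... write this market's documents in a loop
};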

Creating a Tailer is also expensive; create it once and keep polling it for updates.

Creating objects can be expensive, so I would avoid creating any objects, e.g. avoid calling toString or LocalDateTime.now.

Here is an example of benchmarking the writes:

String path = OS.getTarget();
ChronicleQueue queue = SingleChronicleQueueBuilder.binary(path + "/market").build();
ExcerptAppender appender = queue.acquireAppender();
Exchange exchange = Exchange.EBS;
CurrencyPair currencyPair = CurrencyPair.EURUSD;
double price = 1.2345;
for (int t = 0; t < 5; t++) {
    long start = System.nanoTime();
    int messages = 100000;
    for (int i = 0; i < messages; i++) {
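        // writingDocument() opens a DocumentContext; the excerpt is committed to the queue when the context is closed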
        try (DocumentContext dc = appender.writingDocument()) {
            ValueOut valueOut = dc.wire().getValueOut();
            valueOut.text("buy")
                    .getValueOut().asEnum(exchange)
                    .getValueOut().asEnum(currencyPair)
                    .getValueOut().int64(System.currentTimeMillis())
                    .getValueOut().float64(price);
        }
    }
    long time = System.nanoTime() - start;
    System.out.printf("Throughput was %,d messages per second%n", (long) (messages * 1e9 / time));
    Jvm.pause(100);
}

prints

Throughput was 962,942 messages per second
Throughput was 2,952,433 messages per second
Throughput was 4,776,337 messages per second
Throughput was 3,250,235 messages per second
Throughput was 3,514,863 messages per second

And for reading you can do

final ExcerptTailer tailer = queue.createTailer();
for (int t = 0; t < 5; t++) {
    long start = System.nanoTime();
    int messages = 100000;
    for (int i = 0; i < messages; i++) {
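        // readingDocument() always returns a context; isPresent() reports whether an excerpt was actually available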
        try (DocumentContext dc = tailer.readingDocument()) {
            if (!dc.isPresent())
                throw new AssertionError("Missing t: " + t + ", i: " + i);
            ValueIn in = dc.wire().getValueIn();
            String buy = in.text();
            Exchange exchange2 = in.asEnum(Exchange.class);
            CurrencyPair currencyPair2 = in.asEnum(CurrencyPair.class);
            long time = in.int64();
            double price2 = in.float64();
        }
    }
    long time = System.nanoTime() - start;
    System.out.printf("Read Throughput was %,d messages per second%n", (long) (messages * 1e9 / time));
}

note: it reads the same number of messages as were written.

prints

Read Throughput was 477,849 messages per second
Read Throughput was 3,083,642 messages per second
Read Throughput was 5,100,516 messages per second
Read Throughput was 6,342,525 messages per second
Read Throughput was 6,672,971 messages per second
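
To continuously listen for updates (the while (true) part of the question), a minimal polling sketch, assuming the queue and tailer are created once per process, could look like the following; the back-off choice (Jvm.pause versus busy-spinning) is an assumption, not part of the original answer:

try (ChronicleQueue queue = SingleChronicleQueueBuilder.binary(path + "/market").build()) {
    ExcerptTailer tailer = queue.createTailer();
    while (!Thread.currentThread().isInterrupted()) {
        try (DocumentContext dc = tailer.readingDocument()) {
            if (!dc.isPresent()) {
                Jvm.pause(1); // nothing to read yet; back off briefly, or busy-spin for lower latency
                continue;
            }
            ValueIn in = dc.wire().getValueIn();
            String side = in.text();
            // ... read the remaining fields as in the benchmark above
        }
    }
}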
