
java - how to collect and write logs to a file in batches after a time interval using log4j2 or logback

I am trying to build a custom log appender using log4j2. My problem is that I don't want to write log events to a file appender immediately, but after a delay. Ideally, my Spring Boot app should collect all the logs in some data structure and then flush them to the file in batches every 3 minutes. (I should not use Spring Batch, since this is not a batch application but a simple Spring Boot starter.)

When I was writing my custom provider for Logback (as part of the Loki4j project), I came up with a concise implementation of a thread-safe buffer that can trigger an output operation either by batch size or by timeout since the last output.

Usage pattern:

private static final LogRecord[] ZERO_EVENTS = new LogRecord[0];
private ConcurrentBatchBuffer<ILoggingEvent, LogRecord> buffer =
    new ConcurrentBatchBuffer<>(batchSize, LogRecord::create, (e, fl) -> eventFileLine(e, fl));


// somewhere in the code path where a new log event arrives
var batch = buffer.add(event, ZERO_EVENTS);
if (batch.length > 0)
    handleBatch(batch);


// somewhere in a scheduled method that triggers every timeoutMs
var batch = buffer.drain(timeoutMs, ZERO_EVENTS);
if (batch.length > 0)
    return handleBatch(batch).thenApply(r -> null);

// handling batches here; returns a future so the scheduled caller can chain on it
private CompletableFuture<Void> handleBatch(LogRecord[] lines) {
    // flush the batch to the file, then complete the future
    return CompletableFuture.completedFuture(null); // placeholder
}
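`ConcurrentBatchBuffer` is part of Loki4j, so the snippet above is not self-contained. As a rough, illustrative sketch of the same size-or-timeout idea using only the JDK (the class and method names here are my own, not Loki4j's):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for a batch buffer: add() hands back a full batch
// when the size threshold is reached; drain() is meant to be called from a
// scheduled task so batches also go out after a timeout.
class SimpleBatchBuffer<T> {
    private final int batchSize;
    private final List<T> items = new ArrayList<>();

    SimpleBatchBuffer(int batchSize) {
        this.batchSize = batchSize;
    }

    // Returns a full batch to flush, or an empty list otherwise.
    synchronized List<T> add(T item) {
        items.add(item);
        if (items.size() >= batchSize) {
            return drain();
        }
        return List.of();
    }

    // Empties the buffer; call this periodically (e.g. every 3 minutes).
    synchronized List<T> drain() {
        List<T> batch = new ArrayList<>(items);
        items.clear();
        return batch;
    }
}
```

Paired with a `ScheduledExecutorService` that calls `drain()` at a fixed rate, this gives the "flush on size or on timer" behavior the question asks for.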

Try looking at the appenders that come with log4j2. Most of them implement functionality similar to what you describe. E.g. RandomAccessFileAppender writes to the file only after it receives a complete batch of log events from the async appender infrastructure. They all share an OutputStreamManager or FileManager that encapsulates this logic:

protected synchronized void write(final byte[] bytes, final int offset, final int length, final boolean immediateFlush) {
    if (immediateFlush && byteBuffer.position() == 0) {
...

Unfortunately, there seems to be no time-based solution for this.

I have written a log4j2 appender for Loki. It has its own ring buffer and a thread that sends a batch when there is enough data or a user-specified timeout has passed, here:

if (exceededBatchSizeThreshold() || exceededWaitTimeThreshold(currentTimeMillis)) {
    try {
        httpClient.log(outputBuffer);
    } finally {
        outputBuffer.clear();
        timeoutDeadline = currentTimeMillis + batchWaitMillis;
    }
}
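To make the size-or-timeout check above concrete, here is a minimal self-contained sketch of how such a flusher could be driven; the field and threshold names mirror the snippet, but the class itself and the `send`/`lastSent` details are illustrative assumptions, not the actual appender's code:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative driver for a size-or-timeout batch flush: appends trigger a
// check immediately, and a scheduled task re-checks the timeout periodically.
class BatchFlusher {
    private final int batchSizeThreshold;
    private final long batchWaitMillis;
    private final StringBuilder outputBuffer = new StringBuilder();
    private long timeoutDeadline;
    private int pendingEvents;
    String lastSent; // captured for demonstration only

    BatchFlusher(int batchSizeThreshold, long batchWaitMillis) {
        this.batchSizeThreshold = batchSizeThreshold;
        this.batchWaitMillis = batchWaitMillis;
        this.timeoutDeadline = System.currentTimeMillis() + batchWaitMillis;
    }

    synchronized void append(String line) {
        outputBuffer.append(line).append('\n');
        pendingEvents++;
        maybeFlush(System.currentTimeMillis());
    }

    synchronized void maybeFlush(long currentTimeMillis) {
        // Flush when either threshold is exceeded, mirroring the check above.
        if (pendingEvents >= batchSizeThreshold || currentTimeMillis >= timeoutDeadline) {
            if (pendingEvents > 0) {
                try {
                    send(outputBuffer.toString()); // e.g. an HTTP call or file write
                } finally {
                    outputBuffer.setLength(0);
                    pendingEvents = 0;
                }
            }
            timeoutDeadline = currentTimeMillis + batchWaitMillis;
        }
    }

    void send(String batch) {
        lastSent = batch;
        System.out.print(batch);
    }

    // Background task covering the timeout-only case (no new appends arriving).
    void start() {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(() -> maybeFlush(System.currentTimeMillis()),
                batchWaitMillis, batchWaitMillis, TimeUnit.MILLISECONDS);
    }
}
```

The scheduled task is what guarantees a batch still goes out within roughly one wait interval even when the app logs too little to hit the size threshold.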
