
log4j Rolling file appender - multi-threading issues?

Are there any known bugs with the Log4j rolling file appender? I have been using log4j happily for a number of years but was not aware of this. A colleague of mine suggests that there are known issues (and I found a Bugzilla entry on this) where, under heavy load, the rolling file appender (we use the time-based one) might not perform correctly when the rollover occurs at midnight.

Bugzilla entry - https://issues.apache.org/bugzilla/show_bug.cgi?id=44932

I would appreciate input and pointers on how others have overcome this.

Thanks, Manglu

I have not encountered this issue myself, and from the bug report I would suspect that it is very uncommon. The Log4j RollingFileAppender has always worked in a predictable and reliable fashion for the apps I have developed and maintained.

This particular bug, if I understand correctly, would only happen if there are multiple instances of Log4j writing to the same log file, for example if you had multiple instances of the same app running simultaneously. Then, when it is rollover time, one instance cannot get a lock on the file in order to delete it and archive its contents, resulting in the loss of the data that was to be archived.
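For reference, a minimal log4j 1.x configuration that matches the scenario in the bug report might look like the following (the file path and logger names are assumptions for illustration): two JVMs started with this same `log4j.properties` would both roll `app.log` at midnight and race for the file.

```properties
# Assumed minimal log4j.properties reproducing the bug-report scenario:
# two separate JVM processes share this config, hence the same log file.
log4j.rootLogger=INFO, daily

# Time-based (daily) rolling appender; rollover happens at midnight.
log4j.appender.daily=org.apache.log4j.DailyRollingFileAppender
log4j.appender.daily.File=/var/log/myapp/app.log
log4j.appender.daily.DatePattern='.'yyyy-MM-dd
log4j.appender.daily.layout=org.apache.log4j.PatternLayout
log4j.appender.daily.layout.ConversionPattern=%d %-5p [%t] %c - %m%n
```

DailyRollingFileAppender performs the rename-and-reopen itself with no cross-process coordination, which is why two processes pointed at the same `File` can clobber each other at rollover.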

I cannot speak to any of the other known bugs your colleague mentioned unless you cite them specifically. In general, I believe Log4j is reliable for production apps.

@kg, this happens to me too - this exact situation, with 2 instances of the same program. I updated it to the newer rolling.RollingFileAppender instead of using DailyRollingFileAppender.
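The "newer" appender mentioned here ships in the apache-log4j-extras companion jar and is configured via XML rather than properties. A hedged sketch of the equivalent setup (file names and pattern are illustrative assumptions):

```xml
<!-- Sketch using org.apache.log4j.rolling.RollingFileAppender from the
     apache-log4j-extras companion; rollover time and archive name are
     both driven by the FileNamePattern. -->
<appender name="roller" class="org.apache.log4j.rolling.RollingFileAppender">
  <rollingPolicy class="org.apache.log4j.rolling.TimeBasedRollingPolicy">
    <!-- %d{yyyy-MM-dd} means one archive per day, rolled at midnight -->
    <param name="FileNamePattern" value="/var/log/myapp/app.%d{yyyy-MM-dd}.log.gz"/>
  </rollingPolicy>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d %-5p [%t] %c - %m%n"/>
  </layout>
</appender>
```

Note that this appender is still not designed for multiple processes writing to one file, so switching alone does not guarantee the race goes away.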

I run the two instances simultaneously via crontab. The instances output as many messages as they can until 5 seconds have elapsed. They measure time in one-second steps using System.currentTimeMillis and add to a counter to estimate the 5-second period for the loop, so there is minimal overhead in this test. Each output log message contains an incrementing number, and the message contains identifiers set from the command line so the two streams can be told apart.
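The harness described above can be sketched roughly as follows. This is a self-contained approximation with assumed names; the original wrote each line through a log4j Logger, which is replaced here by an in-memory list so the sketch compiles on its own, and the timing uses a single deadline rather than the original one-second accumulation.

```java
import java.util.ArrayList;
import java.util.List;

public class RolloverStressTest {
    /**
     * Emit numbered messages as fast as possible for roughly runMillis ms.
     * In the real test each string would go through logger.info(...) and
     * two copies of this program would run concurrently from crontab.
     */
    public static List<String> run(String instanceId, long runMillis) {
        List<String> sink = new ArrayList<>(); // stand-in for the log4j appender
        long deadline = System.currentTimeMillis() + runMillis;
        long seq = 0;
        while (System.currentTimeMillis() < deadline) {
            // Each message carries an incrementing sequence number plus an
            // instance id passed on the command line, so interleaved output
            // from the two processes can be separated and checked for gaps.
            sink.add(instanceId + " " + seq++);
        }
        return sink;
    }

    public static void main(String[] args) {
        String id = args.length > 0 ? args[0] : "A";
        List<String> out = run(id, 5000L);
        System.out.println(id + ": " + out.size() + " messages logged");
    }
}
```

Reassembling the two output streams by sequence number is what reveals the data loss: a gap at the start of one stream means that instance's early messages went into the file the other instance deleted at rollover.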

Putting the log message order back together shows that one of the processes succeeds in writing the sequence from start to end, while the other one loses the first entries of its output (from 0 onward).

This really ought to be fixed...
