Asynchronous Logging
Right now in my application, at certain points we log some heavy data to the log files. Basically, just for logging, we create JSON from the available data and then write it to the log files. Logging the data in JSON format is a business requirement.

Creating the JSON from the available data and then writing it to the file takes a lot of time and increases the response time of the original request. The idea now is to improve this situation.
One of the things we have discussed is to create a thread pool using
Executors.newSingleThreadExecutor()
in our code and then submit tasks to it that convert the data to JSON and do the subsequent logging.
Is this a good approach? Since we would be managing the thread pool ourselves, is it going to create any issues?
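To make the idea concrete, here is a minimal sketch of the approach described above. `AsyncJsonLogger`, `toJson`, and `logAsync` are illustrative names, and the JSON conversion is a trivial stand-in for whatever the real (slow) conversion is:

```java
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AsyncJsonLogger {
    // Single worker thread; submitted tasks are queued and run in submission order.
    private static final ExecutorService LOG_POOL = Executors.newSingleThreadExecutor();

    // Hypothetical stand-in for the real JSON conversion, which is the slow part.
    static String toJson(Map<String, String> data) {
        StringBuilder sb = new StringBuilder("{");
        for (Map.Entry<String, String> e : data.entrySet()) {
            if (sb.length() > 1) sb.append(',');
            sb.append('"').append(e.getKey()).append("\":\"").append(e.getValue()).append('"');
        }
        return sb.append('}').toString();
    }

    // Returns immediately; JSON conversion and logging happen on the worker thread,
    // so the request thread is not slowed down.
    public static void logAsync(final Map<String, String> data) {
        LOG_POOL.submit(new Runnable() {
            public void run() {
                String json = toJson(data);
                System.out.println(json); // replace with e.g. logger.info(json)
            }
        });
    }

    public static void main(String[] args) throws InterruptedException {
        logAsync(java.util.Collections.singletonMap("event", "orderCreated"));
        LOG_POOL.shutdown();
        LOG_POOL.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Note that the executor must be shut down cleanly (for example from a `@PreDestroy` method in an EJB environment), otherwise queued log entries can be lost when the application stops.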
I would appreciate it if someone could share a better solution, perhaps some way to use Log4j for this. I tried AsyncAppender but did not achieve the desired result.

We are using EJB 3, JBoss 5.0, Log4j, and Java 6.
I believe you are on the right track in using a separate thread pool for logging. Many products offer an asynchronous logging feature: logs are accumulated and pushed to the log files on a thread separate from the request thread. Especially in production environments, where there are millions of incoming requests and response times need to stay under a few seconds, you cannot afford to let something like logging slow down the system. So the usual approach is to append log entries to an in-memory buffer and flush them asynchronously in reasonably sized chunks.
A word of caution while using a thread pool for logging: as multiple threads will be working on the log file(s) and on an in-memory log buffer, you need to be careful. Add log entries to a FIFO-style buffer so that they are written to the log files in timestamp order. Also make sure file access is synchronized, so you don't end up with a log file that is interleaved or corrupted.
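The buffer-and-consumer idea above can be sketched with a `BlockingQueue`. `BufferedLogWriter` is an illustrative name, and the `System.out.println` call stands in for the real file write:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BufferedLogWriter {
    // FIFO buffer: entries come out in the order the request threads put them in.
    private final BlockingQueue<String> buffer = new LinkedBlockingQueue<String>(10000);

    private final Thread writer = new Thread(new Runnable() {
        public void run() {
            try {
                while (true) {
                    // Blocks until an entry is available. With a single consumer
                    // thread, file access needs no further synchronization.
                    String entry = buffer.take();
                    System.out.println(entry); // replace with the actual file write
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    });

    public BufferedLogWriter() {
        writer.setDaemon(true);
        writer.start();
    }

    // Called from request threads. offer() never blocks: if the buffer is full
    // the entry is dropped and false is returned, so logging can't stall a request.
    public boolean log(String message) {
        return buffer.offer(System.currentTimeMillis() + " " + message);
    }
}
```

The bounded queue is a deliberate choice: an unbounded buffer would protect every log entry but could exhaust memory if the writer falls behind, so decide whether dropping or blocking is the acceptable overflow behavior for your requirement.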
Have a look at Logback's AsyncAppender. It already provides a separate thread, a queue, and so on, and it is easy to configure. It does almost the same thing you are doing, but saves you from reinventing the wheel.
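A minimal `logback.xml` along these lines wraps a file appender in an AsyncAppender (file name and pattern here are placeholders):

```xml
<configuration>
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>app.log</file>
    <encoder>
      <pattern>%d %level %msg%n</pattern>
    </encoder>
  </appender>

  <!-- Logging calls return after enqueueing; a worker thread writes to FILE. -->
  <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
    <queueSize>512</queueSize>
    <!-- 0 = never discard events, even when the queue is nearly full -->
    <discardingThreshold>0</discardingThreshold>
    <appender-ref ref="FILE"/>
  </appender>

  <root level="INFO">
    <appender-ref ref="ASYNC"/>
  </root>
</configuration>
```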
Have you considered using MongoDB for logging?
There is also Log4j 2: http://logging.apache.org/log4j/2.x/manual/async.html

Also read this article about why it is so fast: http://www.grobmeier.de/log4j-2-performance-close-to-insane-20072013.html#.UzwywI9Bow4
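As a sketch, a `log4j2.xml` can make just the hot logger asynchronous while the rest stays synchronous (the logger name and file name are placeholders; async loggers additionally require the LMAX Disruptor jar on the classpath):

```xml
<Configuration status="warn">
  <Appenders>
    <File name="File" fileName="app.log">
      <PatternLayout pattern="%d %p %m%n"/>
    </File>
  </Appenders>
  <Loggers>
    <!-- Mixed mode: only this logger hands events to a background thread. -->
    <AsyncLogger name="com.example.json" level="info" additivity="false">
      <AppenderRef ref="File"/>
    </AsyncLogger>
    <Root level="info">
      <AppenderRef ref="File"/>
    </Root>
  </Loggers>
</Configuration>
```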
You can also try CoralLog to log data asynchronously using the disruptor pattern. That way you spend minimal time in the logging thread, and all the heavy work is handed off to the thread doing the actual file I/O. It also provides memory-mapped files to speed up the consumer thread and reduce queue contention.
Disclaimer: I am one of the developers of CoralLog.