

python WatchedFileHandler still writing to old file after rotation

I've been using WatchedFileHandler as my Python logging file handler, so that I can rotate my logs with logrotate (on Ubuntu 14.04), which is what the docs say it's for. My logrotate config file looks like

/path_to_logs/*.log {
        daily
        rotate 365
        size 10M
        compress
        delaycompress
        missingok
        notifempty
        su root root
}
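For context, the Python side of a setup like this is typically just a WatchedFileHandler attached to a logger. A minimal sketch (the path and logger name here are illustrative, and the log is written to a temp directory so the snippet runs anywhere):

```python
import logging
import os
import tempfile
from logging.handlers import WatchedFileHandler

# Illustrative path; in the question this would live under /path_to_logs/.
log_path = os.path.join(tempfile.mkdtemp(), "app.log")

# WatchedFileHandler re-checks the file's device/inode before each write
# and reopens the file if logrotate has moved it aside.
handler = WatchedFileHandler(log_path)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("logging to %s", log_path)
```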

Everything seemed to be working just fine. I'm using logstash to ship my logs to my Elasticsearch cluster and everything is great. I added a second log file for my debug logs, which gets rotated but is not watched by logstash. I noticed that when that file is rotated, Python just keeps writing to /path_to_debug_logs/*.log.1 and never starts writing to the new file. If I manually tail /path_to_debug_logs/*.log.1, it switches over instantly and starts writing to /path_to_debug_logs/*.log.

This seems REALLY weird to me.

I believe what is happening is that logstash is always tailing my non-debug logs, which somehow triggers the switch-over to the new file after logrotate is called. If logrotate is called twice without a switch-over, the log.1 file gets moved and compressed to log.2.gz, which Python can no longer log to, and those logs are lost.

Clearly there are a bunch of hacky solutions to this (such as a cron job that tails all my logs every now and then), but I feel like I must be doing something wrong.

I'm using WatchedFileHandler and logrotate instead of RotatingFileHandler for a number of reasons, but mainly because logrotate will nicely compress my logs for me after rotation.

UPDATE:

I tried the horrible hack of adding a manual tail to the end of my logrotate config script:

sharedscripts
postrotate
    /usr/bin/tail -n 1 path_to_logs/*.log.1
endscript

Sure enough, this works most of the time, but it randomly fails sometimes for no clear reason, so it isn't a solution. I've also tried a number of less hacky approaches where I modified the way WatchedFileHandler checks whether the file has changed, but no luck.

I'm fairly sure the root of my problem is that the logs are stored on a network drive, which is somehow confusing the file-system checks.

I'm moving my rotation into Python with RotatingFileHandler, but if anyone knows the proper way to handle this, I'd love to know.
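A minimal sketch of that RotatingFileHandler setup, assuming the same size and retention limits as the logrotate config above (path and logger name are illustrative, using a temp directory so the snippet runs anywhere):

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

# Illustrative path; in the question this would live under /path_to_debug_logs/.
log_path = os.path.join(tempfile.mkdtemp(), "debug.log")

# Rotate from within Python instead of logrotate: roll over at ~10 MB,
# keeping up to 365 old files (debug.log.1 ... debug.log.365).
handler = RotatingFileHandler(log_path, maxBytes=10 * 1024 * 1024, backupCount=365)

logger = logging.getLogger("debug")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

logger.debug("rotation handled by the handler itself")
```

Note that RotatingFileHandler does not compress old files the way logrotate does; on Python 3.3+ the handler's `rotator` and `namer` attributes can be set to gzip each file as it is rotated.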

Use the copytruncate option of logrotate. From the docs:

copytruncate

Truncate the original log file in place after creating a copy, instead of moving the old log file and optionally creating a new one. It can be used when some program cannot be told to close its logfile and thus might continue writing (appending) to the previous log file forever. Note that there is a very small time slice between copying the file and truncating it, so some logging data might be lost. When this option is used, the create option will have no effect, as the old log file stays in place.
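Applied to the config from the question, that would look like the following (copytruncate replaces the default move-and-recreate behaviour, so the file descriptor Python holds open keeps pointing at the live log file):

```
/path_to_logs/*.log {
        daily
        rotate 365
        size 10M
        copytruncate
        compress
        delaycompress
        missingok
        notifempty
        su root root
}
```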

WatchedFileHandler does a rollover when a device and/or inode change is detected in the log file, just before writing to it. Perhaps for the file which isn't being watched by logstash, no change in its device/inode is being seen? That would explain why the handler keeps on writing to it.
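The check the handler performs can be sketched as comparing the open stream's (st_dev, st_ino) against a fresh os.stat() of the path. On a network filesystem that serves stat results from a stale attribute cache, this comparison can miss the rotation, which would match the behaviour described in the question. A runnable illustration (paths are in a temp directory; the rotation is simulated with os.rename):

```python
import os
import tempfile

log_dir = tempfile.mkdtemp()
path = os.path.join(log_dir, "app.log")

# Open the log and remember the device/inode of the open stream,
# as WatchedFileHandler does when it opens the file.
f = open(path, "a")
st_open = os.fstat(f.fileno())
dev, ino = st_open.st_dev, st_open.st_ino

# Simulate what logrotate's default behaviour does:
# move the file aside and create a fresh one at the same path.
os.rename(path, path + ".1")
open(path, "a").close()

# The check performed before each write: has the path's dev/inode changed?
st_now = os.stat(path)
rotated = (st_now.st_dev, st_now.st_ino) != (dev, ino)
if rotated:
    f.close()
    f = open(path, "a")  # reopen the new file, as the handler would
```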
