
Multiple instances of the same application server cause 2 problems

I have two server instances for an ASP.NET app. After we moved to a cluster, I noticed that some of my rolling log files are only 2-3 KB instead of 15 MB. When the app ran on a single server this worked perfectly. This is my log4net configuration:

<appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender">
  <file type="log4net.Util.PatternString" value="\\file_server\\MyLog.xml"/>
  <appendToFile value="true"/>
  <lockingModel type="log4net.Appender.FileAppender+MinimalLock"/>
  <datePattern value="ddMMyyyy"/>
  <rollingStyle value="Size"/>
  <maxSizeRollBackups value="14"/>
  <maximumFileSize value="15360KB"/>
  <staticLogFileName value="true"/>
  <countDirection value="1"/>
  <layout type="log4net.Layout.XmlLayoutSchemaLog4j">
    <locationInfo value="true"/>
  </layout>
</appender>

I'm not sure if this is a bug or something else. I also found an issue that seems related, but no solution: https://issues.apache.org/jira/browse/LOG4J2-174

Your problem is that you are writing to the same log file from multiple servers (value="\\file_server\\MyLog.xml"). This is going to be bad news from a couple of perspectives.

  1. The file is likely to be locked when one of the instances wants to use it, and the log file will not get updated.
  2. Every single log entry generates network traffic, which is the slowest traffic of all. If you are writing asynchronously you might not notice this, but it's best to write logs locally (see the sketch after this list) and then correlate them using a log reader tool like Splunk.
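A rough sketch of that second point follows; the local D:\Logs folder and the machine-name/process-id file pattern are illustrative assumptions, not tested configuration. Each instance writes its own file, and the files are merged or searched later with a tool like Splunk.

<!-- Sketch only: per-instance local log file instead of a shared UNC path.      -->
<!-- %env{COMPUTERNAME} and %processid are log4net.Util.PatternString converters -->
<!-- used here to keep instances from colliding on the same file.                -->
<appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender">
  <file type="log4net.Util.PatternString"
        value="D:\Logs\MyLog_%env{COMPUTERNAME}_%processid.xml"/>
  <appendToFile value="true"/>
  <rollingStyle value="Size"/>
  <maxSizeRollBackups value="14"/>
  <maximumFileSize value="15360KB"/>
  <staticLogFileName value="true"/>
  <layout type="log4net.Layout.XmlLayoutSchemaLog4j">
    <locationInfo value="true"/>
  </layout>
</appender>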

Too long for a comment.

The log4net documentation is very clear on this:

Before you even start trying any of the alternatives provided, ask yourself whether you really need to have multiple processes log to the same file, then don't do it ;-).

FileAppender offers pluggable locking models for this usecase but all existing implementations have issues and drawbacks.

By default the FileAppender holds an exclusive write lock on the log file while it is logging. This prevents other processes from writing to the file. This model is known to break down with (at least on some versions of) Mono on Linux and log files may get corrupted as soon as another process tries to access the log file.

MinimalLock only acquires the write lock while a log is being written. This allows multiple processes to interleave writes to the same file, albeit with a considerable loss in performance.

InterProcessLock doesn't lock the file at all but synchronizes using a system wide Mutex. This will only work if all processes cooperate (and use the same locking model). The acquisition and release of a Mutex for every log entry to be written will result in a loss of performance, but the Mutex is preferable to the use of MinimalLock.
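If several processes really must share one file, the locking model is switched by replacing the <lockingModel> element shown in the question's configuration; a minimal sketch, which only helps if every writer uses the same model:

<!-- Synchronize writers with a system-wide Mutex instead of a file lock -->
<lockingModel type="log4net.Appender.FileAppender+InterProcessLock"/>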

If you use RollingFileAppender things become even worse, as several processes may try to start rolling the log file concurrently. RollingFileAppender completely ignores the locking model when rolling files; rolling files is simply not compatible with this scenario.

A better alternative is to have your processes log to RemotingAppenders. Using the RemoteLoggingServerPlugin (or IRemoteLoggingSink) a process can receive all the events and log them to a single log file. One of the examples shows how to use the RemoteLoggingServerPlugin.
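A hedged sketch of the sending side of that setup follows; the sink URL is a placeholder, and a central process still has to host a matching sink (for example via RemoteLoggingServerPlugin) and attach its own FileAppender:

<appender name="RemotingAppender" type="log4net.Appender.RemotingAppender">
  <!-- placeholder URL of the remoting sink hosted by the central logging process -->
  <sink value="tcp://loghost:8085/LoggingSink"/>
  <!-- buffer events so they are sent in batches rather than one call per entry -->
  <lossy value="false"/>
  <bufferSize value="95"/>
  <onlyFixPartialEventData value="true"/>
</appender>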

Or you could log to a database instead.
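For the database route, a trimmed AdoNetAppender sketch is below; the connection string, table, and column names are placeholders to adapt to your own schema:

<appender name="AdoNetAppender" type="log4net.Appender.AdoNetAppender">
  <bufferSize value="100"/>
  <connectionType value="System.Data.SqlClient.SqlConnection, System.Data, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
  <!-- placeholder connection string; point it at your own log database -->
  <connectionString value="data source=SQLSERVER;initial catalog=Logs;integrated security=true"/>
  <commandText value="INSERT INTO Log ([Date],[Level],[Logger],[Message]) VALUES (@log_date, @log_level, @logger, @message)"/>
  <parameter>
    <parameterName value="@log_date"/>
    <dbType value="DateTime"/>
    <layout type="log4net.Layout.RawTimeStampLayout"/>
  </parameter>
  <parameter>
    <parameterName value="@log_level"/>
    <dbType value="String"/>
    <size value="50"/>
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%level"/>
    </layout>
  </parameter>
  <parameter>
    <parameterName value="@logger"/>
    <dbType value="String"/>
    <size value="255"/>
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%logger"/>
    </layout>
  </parameter>
  <parameter>
    <parameterName value="@message"/>
    <dbType value="String"/>
    <size value="4000"/>
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%message"/>
    </layout>
  </parameter>
</appender>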
