
Log4net buffer not flushing when full in lossy setup

I'm using the Log4net ElasticSearchAppender in my C# web API with a BufferSize of 10 and Lossy set to true to preserve performance, as described here: https://github.com/bruno-garcia/log4net.ElasticSearch/wiki/02-Appender-Settings

The wiki describes the `<lossy>` setting like this: "Log4net.ElasticSearch uses a buffer to collect events and then flush them to the Elasticsearch server on a background thread. Setting this value to true will cause log4net.Elasticsearch to begin discarding events if the buffer is full and has not been flushed. This could happen if the Elasticsearch server becomes unresponsive or goes offline."

I also set the evaluator to ERROR, which should force the buffer to flush whenever an ERROR occurs.

Here's the associated config file:

<?xml version="1.0"?>
<log4net>
  <appender name="ElasticSearchAppender" type="log4net.ElasticSearch.ElasticSearchAppender, log4net.ElasticSearch">
    <threshold value="ALL" />
    <layout type="log4net.Layout.PatternLayout,log4net">
      <param name="ConversionPattern" value="%d{ABSOLUTE} %-5p %c{1}:%L - %m%n" />
    </layout>
    <connectionString value="Server=my-elasticsearch-server;Index=foobar;Port=80;rolling=true;mode=tcp"/>
    <lossy value="true" />
    <bufferSize value="10" />
    <evaluator type="log4net.Core.LevelEvaluator">
      <threshold value="ERROR" />
    </evaluator>
  </appender>
  <root>
    <level value="DEBUG" />
    <appender-ref ref="ElasticSearchAppender" />
  </root>
</log4net>

Here's the behaviour I get: the flush triggered by an ERROR (the evaluator) works fine, but INFO or DEBUG messages alone are never flushed to Elasticsearch, even if there are 10, 20, or 100 of them.

The buffer never flushes when full in this configuration; it just keeps discarding DEBUG and INFO logs until an ERROR comes along, even though Elasticsearch is online and perfectly responsive.

Note: I tried setting lossy to false, and the buffer does flush when full. But I'm afraid this would hurt my application's responsiveness too much.

Am I getting something wrong? Is there a better way to log while minimizing the performance impact?

After testing the behaviour, here's what I found:

The buffer becoming full never triggers a flush when lossy is true.

Bruno Garcia's wiki page is quite misleading about the Lossy property, especially this sentence:

Setting this value to true will cause (...) to begin discarding events if the buffer is full (...). This could happen if the Elasticsearch server becomes unresponsive or goes offline.

In fact it has nothing to do with the appender or Elasticsearch being unresponsive: in a lossy configuration, only an evaluator will trigger the flushing of the buffer:

  • LevelEvaluator flushes when an event of a given level (e.g. FATAL or ERROR) occurs, preserving the context of the crash (the last logs recorded before it):

     <evaluator type="log4net.Core.LevelEvaluator"> <threshold value="ERROR"/> </evaluator>
  • TimeEvaluator flushes when a given time interval has elapsed:

    <evaluator type="log4net.Core.TimeEvaluator"> <interval value="300"/> </evaluator>
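Putting it together, the behaviour I observed can be modelled like this — a simplified, purely illustrative Python sketch of the lossy buffering semantics, not log4net's actual source:

```python
from collections import deque

ERROR, INFO = 40, 20  # numeric stand-ins for log levels

class LossyBuffer:
    """Models Lossy=true: a fixed-size cyclic buffer that silently
    overwrites its oldest event instead of flushing when full."""

    def __init__(self, size, trigger_level):
        self.events = deque(maxlen=size)  # oldest entries silently dropped
        self.trigger_level = trigger_level
        self.flushed = []                 # stands in for "sent to Elasticsearch"

    def append(self, level, message):
        self.events.append((level, message))
        # Only the evaluator triggers a flush -- never "buffer full".
        if level >= self.trigger_level:
            self.flushed.extend(self.events)
            self.events.clear()

buf = LossyBuffer(size=10, trigger_level=ERROR)
for i in range(100):            # 100 INFO events: nothing is ever flushed,
    buf.append(INFO, f"info {i}")   # 90 of them are silently discarded
assert buf.flushed == []
buf.append(ERROR, "boom")       # evaluator fires: last 9 INFOs + the ERROR go out
assert len(buf.flushed) == 10
```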

For my purpose I finally decided to configure a TimeEvaluator with a 5-minute interval. This way, as long as there are no more than 200 logs (my buffer size) per 5 minutes, no log is discarded and the performance impact stays low.
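The appender configuration I ended up with looks something like this (connection settings unchanged from the question, layout omitted for brevity; the interval is in seconds, so 300 = 5 minutes):

```xml
<appender name="ElasticSearchAppender" type="log4net.ElasticSearch.ElasticSearchAppender, log4net.ElasticSearch">
  <threshold value="ALL" />
  <connectionString value="Server=my-elasticsearch-server;Index=foobar;Port=80;rolling=true;mode=tcp"/>
  <lossy value="true" />
  <bufferSize value="200" />
  <!-- TimeEvaluator flushes the buffer every 300 seconds instead of
       waiting for an ERROR-level event -->
  <evaluator type="log4net.Core.TimeEvaluator">
    <interval value="300" />
  </evaluator>
</appender>
```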
