
Logstash logs are read but are not pushed to elasticsearch

I have the following logstash configuration:

input {
  file {
    codec => "json_lines"
    path => ["/etc/logstash/input.log"]
    sincedb_path => "/etc/logstash/dbfile"
    start_position => "beginning"
    ignore_older => "0"
  }
}
output {
   elasticsearch {
      hosts => ["192.168.169.46:9200"]
   }
   stdout {
      codec => rubydebug
   }
}

The /etc/logstash/input.log file is populated with logs from a running Java application. The logs are in the following JSON format (each entry is written on a single line, with entries separated by the \n character):

{
"exception": {
    "exception_class": "java.lang.RuntimeException",
    "exception_message": "Test runtime exception stack: 0",
    "stacktrace": "java.lang.RuntimeException: Test runtime exception stack: 0"
},
"@version": 1,
"source_host": "WS-169-046",
"message": "Test runtime exception stack: 0",
"thread_name": "parallel-1",
"@timestamp": "2019-12-02T16:30:14.084+02:00",
"level": "ERROR",
"logger_name": "nl.hnf.logs.aggregator.demo.LoggingTest",
"aplication-name": "demo-log-aggregation"
}
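
Note that with the json_lines codec each event is expected to arrive as a single line terminated by \n, so an entry in /etc/logstash/input.log actually looks something like this (a one-line rendering of the event above, shown here purely for illustration):

{"exception":{"exception_class":"java.lang.RuntimeException","exception_message":"Test runtime exception stack: 0","stacktrace":"java.lang.RuntimeException: Test runtime exception stack: 0"},"@version":1,"source_host":"WS-169-046","message":"Test runtime exception stack: 0","thread_name":"parallel-1","@timestamp":"2019-12-02T16:30:14.084+02:00","level":"ERROR","logger_name":"nl.hnf.logs.aggregator.demo.LoggingTest","aplication-name":"demo-log-aggregation"}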

I also updated the Logstash default template using the Elasticsearch API (PUT request body to http://192.168.169.46:9200/_template/logstash?pretty):

{
"index_patterns": "logstash-*",
"version": 60002,
"settings": {
    "index.refresh_interval": "5s",
    "number_of_shards": 1
},
"mappings": {
    "dynamic_templates": [
        {
            "message_field": {
                "path_match": "message",
                "match_mapping_type": "string",
                "mapping": {
                    "type": "text",
                    "norms": false
                }
            }
        },
        {
            "string_fields": {
                "match": "*",
                "match_mapping_type": "string",
                "mapping": {
                    "type": "text",
                    "norms": false,
                    "fields": {
                        "keyword": {
                            "type": "keyword",
                            "ignore_above": 256
                        }
                    }
                }
            }
        }
    ],
    "properties": {
        "@timestamp": {
            "type": "date"
        },
        "@version": {
            "type": "keyword"
        },
        "source_host": {
            "type": "keyword"
        },
        "message": {
            "type": "text"
        },
        "thread_name": {
            "type": "text"
        },
        "level": {
            "type": "keyword"
        },
        "logger_name": {
            "type": "keyword"
        },
        "aplication_name": {
            "type": "keyword"
        },
        "exception": {
            "dynamic": true,
            "properties": {
                "exception_class": {
                    "type": "text"
                },
                "exception_message": {
                    "type": "text"
                },
                "stacktrace": {
                    "type": "text"
                }
            }
        }
    }
}
}
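
For reference, a sketch of how this template body can be applied, assuming it is saved to a local file (the name logstash-template.json is just a placeholder):

curl -X PUT "http://192.168.169.46:9200/_template/logstash?pretty" \
     -H "Content-Type: application/json" \
     -d @logstash-template.json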

Elasticsearch responds with "acknowledged": true and I can see the template being updated via the API. Now, starting Logstash with the debug log level, I can see the input logs being read but not sent to Elasticsearch; the index is created, but it is always empty (0 documents):

[2019-12-03T09:30:51,655][DEBUG][logstash.inputs.file     ][custom] Received line {:path=>"/etc/logstash/input.log", :text=>"{\"@version\":1,\"source_host\":\"ubuntu\",\"message\":\"Generating some logs: 65778 - 2019-12-03T09:30:50.775\",\"thread_name\":\"parallel-1\",\"@timestamp\":\"2019-12-03T09:30:50.775+00:00\",\"level\":\"INFO\",\"logger_name\":\"nl.hnf.logs.aggregator.demo.LoggingTest\",\"aplication-name\":\"demo-log-aggregation\"}"}
[2019-12-03T09:30:51,656][DEBUG][filewatch.sincedbcollection][custom] writing sincedb (delta since last write = 1575365451)
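
The empty index can be confirmed with a plain count query, e.g. (a sketch, assuming the default logstash-* index naming):

curl "http://192.168.169.46:9200/logstash-*/_count?pretty"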

Also, the Elasticsearch logs are on debug level too, but I don't see any errors there or anything that could give me a hint about the source of the problem.

Do you guys have any idea or suggestion as to why the logs are not being pushed to Elasticsearch?

In Filebeat, setting ignore_older to zero means "do not check how old the file is". For a Logstash file input, it means "ignore files more than zero seconds old", which is effectively "ignore everything". Delete it. If that does not help, then increase the log level to trace and see what the filewatch module says about the files it is monitoring.
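
A sketch of how the log level can be raised when starting Logstash manually (the pipeline file path below is just a placeholder; the same effect can be achieved by setting "log.level: trace" in logstash.yml):

bin/logstash --log.level trace -f /etc/logstash/conf.d/pipeline.conf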

Fixed the issue by using the json codec instead of json_lines, and also removing start_position, ignore_older and sincedb_path:

input {
  file {
    codec => "json"
    path => ["/etc/logstash/input.log"]
  }
}
output {
   elasticsearch {
      hosts => ["192.168.169.46:9200"]
   }
   stdout {
      codec => rubydebug
   }
}

Also, the json_lines codec seems to be incompatible with the file input (the \n separator does not work as expected), presumably because the file input already splits the stream into lines, so the codec never sees the delimiter it is waiting for.
