
Logstash with elasticsearch output: how to write to different indices?

I hope to find here an answer to the question I have been struggling with since yesterday:

I'm configuring Logstash 1.5.6 with a rabbitmq input and an elasticsearch output.

Messages are published to RabbitMQ in bulk format; my Logstash consumes them and writes them all to the default Elasticsearch index logstash-YYYY.MM.DD with this configuration:

input {
  rabbitmq {
    host => 'xxx'
    user => 'xxx'
    password => 'xxx'
    queue => 'xxx'
    exchange => "xxx"
    key => 'xxx'
    durable => true
  }
}

output {
  elasticsearch {
    host => "xxx"
    cluster => "elasticsearch"
    flush_size => 10
    bind_port => 9300
    codec => "json"
    protocol => "http"
  }
  stdout { codec => rubydebug }
}

Now what I'm trying to do is send the messages to different Elasticsearch indices.

The messages coming from the amqp input already have the index and type parameters (bulk format).

So after reading the documentation: https://www.elastic.co/guide/en/logstash/1.5/event-dependent-configuration.html#logstash-config-field-references

I tried this:

input {
  rabbitmq {
    host => 'xxx'
    user => 'xxx'
    password => 'xxx'
    queue => 'xxx'
    exchange => "xxx"
    key => 'xxx'
    durable => true
  }
}

output {
  elasticsearch {
    host => "xxx"
    cluster => "elasticsearch"
    flush_size => 10
    bind_port => 9300
    codec => "json"
    protocol => "http"
    index => "%{[index][_index]}"
  }
  stdout { codec => rubydebug }
}

But what Logstash does is create an index literally named %{[index][_index]} and put all the docs there, instead of reading the _index parameter and sending the docs to that index!
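What seems to be happening can be illustrated with a toy model of Logstash's `%{field}` sprintf interpolation (an assumption for illustration, not the real implementation): when the referenced field is missing from the event, the pattern is left verbatim, which is why an index literally named `%{[index][_index]}` shows up.

```python
import re

# Toy model of Logstash's %{field} / %{[a][b]} sprintf interpolation
# (an assumption for illustration, not Logstash's actual code): when
# the referenced field is missing, the pattern is kept verbatim.
def sprintf(pattern, event):
    def lookup(match):
        ref = match.group(1)
        # "[index][_index]" -> ["index", "_index"]; "index" -> ["index"]
        path = re.findall(r"\[([^\]]+)\]", ref) or [ref]
        value = event
        for key in path:
            if not isinstance(value, dict) or key not in value:
                return match.group(0)  # field missing: keep the pattern as-is
            value = value[key]
        return str(value)
    return re.sub(r"%\{([^}]+)\}", lookup, pattern)

# The event built from the bulk *source* line has no [index][_index]
# field, so the index name stays literal:
event = {"@timestamp": "2017-03-09T15:55:54.520Z", "@version": "1"}
print(sprintf("%{[index][_index]}", event))  # prints %{[index][_index]}

# If the action line's fields were present on the event, it would resolve:
print(sprintf("%{[index][_index]}", {"index": {"_index": "indexA"}}))  # prints indexA
```

So the interpolation itself is fine; the problem is that the event the rabbitmq input hands to the output simply does not contain the action line's fields.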

I also tried the following:

index => %{index}
index => '%{index}'
index => "%{index}"

But none of them works.

Any help?

To sum up, the main question here is: if the RabbitMQ messages have this format:

{"index":{"_index":"indexA","_type":"typeX","_ttl":2592000000}}
{"@timestamp":"2017-03-09T15:55:54.520Z","@version":"1","@fields":{DATA}}

How can I tell Logstash to send the output to the index named "indexA" with type "typeX"?
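For illustration, the action/source pair above can be assembled into a bulk body and inspected (a minimal sketch using the values from the question; the point is that the routing lives in the action line, not in the document itself):

```python
import json

# Action line: names the target index/type; source line: the document.
action = {"index": {"_index": "indexA", "_type": "typeX", "_ttl": 2592000000}}
source = {"@timestamp": "2017-03-09T15:55:54.520Z", "@version": "1"}

# The _bulk body is newline-delimited JSON with a trailing newline.
bulk_body = json.dumps(action) + "\n" + json.dumps(source) + "\n"

# Elasticsearch reads the routing from the action line:
target = json.loads(bulk_body.splitlines()[0])["index"]
print(target["_index"], target["_type"])  # prints: indexA typeX
```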

If your messages in RabbitMQ are already in bulk format, then you don't need the elasticsearch output; a simple http output hitting the _bulk endpoint will do the trick:

output {
    http {
        http_method => "post"
        url => "http://localhost:9200/_bulk"
        format => "message"
        message => "%{message}"
    }
}

So everyone, with Val's help, the solution was:

  • As he said, since the RabbitMQ messages were already in bulk format, there is no need to use the elasticsearch output; the http output to the _bulk API will do it (silly me).
  • So I replaced the output with this:

     output {
       http {
         http_method => "post"
         url => "http://172.16.1.81:9200/_bulk"
         format => "message"
         message => "%{message}"
       }
       stdout { codec => json_lines }
     }
  • But it still wasn't working. I was using Logstash 1.5.6, and after upgrading to Logstash 2.0.0 ( https://www.elastic.co/guide/en/logstash/2.4/_upgrading_using_package_managers.html ) it worked with the same configuration.

There it is :)

If you store JSON messages in RabbitMQ, then this problem can be solved. Use index and type as fields in the JSON message and assign those values to the Elasticsearch output plugin.

index => "%{index}"           # INDEX from the JSON body received from the Kafka producer
document_type => "%{type}"    # TYPE from the JSON body

With this approach, each message can have its own index and type.
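Putting that together, the output section might look like this (a sketch, not the answerer's exact config; the host is a placeholder, the option names depend on your Logstash/plugin version, and it assumes each event carries top-level index and type fields):

```
output {
  elasticsearch {
    hosts         => ["localhost:9200"]   # placeholder host
    index         => "%{index}"           # per-event index from the JSON body
    document_type => "%{type}"            # per-event type from the JSON body
  }
}
```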


 