
Creating correct ElasticSearch indices using Logstash

I don't have access to the corporate ElasticSearch cluster, so I use the Logstash configuration below to create indices and store serialized objects in ElasticSearch.

The problem with this solution is that fields are stored as incorrect types. For example, integer fields are stored as long in ElasticSearch.

input {
    http
    {
        host => "0.0.0.0"
        port => 9600
        codec => json
    }
}

output {
    elasticsearch { 
        hosts => ["elasticsearch:9200"] 
    }
    stdout { codec => rubydebug }
}
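
To illustrate the typing problem described above: Elasticsearch's dynamic mapping treats any whole number in a JSON document as long, so an explicit mapping would be needed to get an integer field. The index and field names below are made up for this illustration, not taken from the original setup.

    # event sent to the Logstash HTTP input (hypothetical field)
    { "user_id": 42 }

    # mapping Elasticsearch infers dynamically (GET my-index/_mapping)
    "user_id": { "type": "long" }

    # explicit mapping that would be required to store the field as integer
    PUT my-index
    {
      "mappings": {
        "properties": {
          "user_id": { "type": "integer" }
        }
      }
    }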

Is it possible to send a schema with the request? (using Protobuf, Thrift, Avro, etc.)

If not, is it possible to send the required ElasticSearch mapping with the request? (I can't use a template file because I don't have file access to Logstash either, and I have hundreds of different objects, which makes templates impractical.)

Edit: I can't specify mutate logic for each field. There are hundreds of them.

You can add a mutate { convert } filter to your configuration file. The Elasticsearch fields will then automatically be mapped to the corresponding type. In your case:

input {
    http
    {
        host => "0.0.0.0"
        port => 9600
        codec => json
    }
}

filter {
  mutate { convert => ["my_field", "integer"]}
}

output {
    elasticsearch { 
        hosts => ["elasticsearch:9200"] 
    }
    stdout { codec => rubydebug }
}  
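
If many fields need converting, the convert option of the mutate filter also accepts a hash, so several fields can be listed in a single filter block. This is only a sketch: the field names and types below are hypothetical, not taken from the question.

    filter {
      mutate {
        # map each field name to its target type (hypothetical fields)
        convert => {
          "user_id"  => "integer"
          "price"    => "float"
          "is_admin" => "boolean"
        }
      }
    }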
