Creating correct ElasticSearch indices using Logstash
I don't have access to the corporate ElasticSearch cluster, and I use the Logstash configuration below to create indices and store serialized objects in ElasticSearch.

The problem with this solution is that fields are stored with incorrect types. For example, integer fields are stored as long in ElasticSearch.
input {
  http {
    host => "0.0.0.0"
    port => 9600
    codec => json
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
  stdout { codec => rubydebug }
}
Is it possible to send a schema with the request (using protobuf, Thrift, Avro, etc.)?

If not, is it possible to send the required ElasticSearch mapping with the request? (I can't use a template file because I don't have file access to Logstash either, and I have hundreds of different objects, which makes that impractical.)
Edit: I can't specify mutate logic for each field; there are hundreds of them.
You can add a mutate { convert } filter to your configuration file. The Elasticsearch fields will then automatically be mapped to the corresponding type. In your case:
input {
  http {
    host => "0.0.0.0"
    port => 9600
    codec => json
  }
}
filter {
  mutate { convert => ["my_field", "integer"] }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
  stdout { codec => rubydebug }
}
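If several fields need converting, the convert option also accepts a hash, so multiple fields can be handled in a single filter. A minimal sketch (the field names here are hypothetical, chosen only for illustration):

```
filter {
  mutate {
    convert => {
      "user_id" => "integer"
      "price"   => "float"
      "active"  => "boolean"
    }
  }
}
```

This still requires listing each field explicitly, so with hundreds of distinct object shapes it only helps for the fields you most care about.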