Logstash output is from another input
I have an issue where my Metricbeat events are caught by my http pipeline.
Logstash, Elasticsearch, and Metricbeat are all running in Kubernetes.
Metricbeat is set up to send to Logstash on port 5044, and Logstash writes the events to a file in /tmp. This works fine. But whenever I create a pipeline with an http input, that pipeline also seems to catch the Metricbeat events and send them to index2 in Elasticsearch, as defined in the http pipeline.
Why does it behave like this?
/usr/share/logstash/pipeline/http.conf
input {
  http {
    port => "8080"
  }
}

output {
  #stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["http://my-host.com:9200"]
    index => "test2"
  }
}
/usr/share/logstash/pipeline/beats.conf
input {
  beats {
    port => "5044"
  }
}

output {
  file {
    path => '/tmp/beats.log'
    codec => "json"
  }
}
/usr/share/logstash/config/logstash.yml
pipeline.id: main
pipeline.workers: 1
pipeline.batch.size: 125
pipeline.batch.delay: 50
http.host: "0.0.0.0"
http.port: 9600
config.reload.automatic: true
config.reload.interval: 3s
/usr/share/logstash/config/pipeline.yml
- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline"
Even if you have multiple config files, Logstash reads them as a single pipeline, concatenating the inputs, filters, and outputs. If you need to run them as separate pipelines, you have two options.
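The concatenation can be pictured as follows: every input feeds one shared queue, and every output sees every event. This is a hypothetical Python sketch of that behavior, not Logstash internals:

```python
# Stand-ins for events arriving from beats.conf and http.conf inputs.
beats_events = [{"source": "beats", "metric": "cpu"}]
http_events = [{"source": "http", "body": "hello"}]

# Both config files become ONE pipeline, so there is one shared queue.
pipeline_queue = beats_events + http_events

elasticsearch_index = []  # stands in for the elasticsearch output (http.conf)
file_log = []             # stands in for the file output (beats.conf)

for event in pipeline_queue:
    # With no conditionals, every output receives every event.
    elasticsearch_index.append(event)
    file_log.append(event)

print(len(elasticsearch_index))  # → 2, the beats event lands in Elasticsearch too
print(len(file_log))             # → 2, the http event also lands in the file
```

This is why the beats events show up in the index defined in http.conf: there is no routing between the two files unless you add it.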
Change your pipelines.yml and create different pipeline.id entries, each one pointing to one of the config files.
- pipeline.id: beats
  path.config: "/usr/share/logstash/pipeline/beats.conf"
- pipeline.id: http
  path.config: "/usr/share/logstash/pipeline/http.conf"
Or you can use tags in your input, filter, and output blocks, for example:
input {
  http {
    port => "8080"
    tags => ["http"]
  }
  beats {
    port => "5044"
    tags => ["beats"]
  }
}

output {
  if "http" in [tags] {
    elasticsearch {
      hosts => ["http://my-host.com:9200"]
      index => "test2"
    }
  }
  if "beats" in [tags] {
    file {
      path => '/tmp/beats.log'
      codec => "json"
    }
  }
}
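Conceptually, each conditional in the output block is checked independently per event, so only the outputs whose tag matches fire. A hypothetical Python sketch of that routing logic (the names are illustrative, not Logstash internals):

```python
def route(event):
    """Mimic the conditional outputs above: each `if` is evaluated
    independently, so an event reaches only the outputs whose tag it carries."""
    outputs = []
    if "http" in event.get("tags", []):
        outputs.append("elasticsearch:test2")   # the elasticsearch output
    if "beats" in event.get("tags", []):
        outputs.append("file:/tmp/beats.log")   # the file output
    return outputs

print(route({"tags": ["http"]}))   # → ['elasticsearch:test2']
print(route({"tags": ["beats"]}))  # → ['file:/tmp/beats.log']
print(route({"tags": []}))         # → [] (untagged events match no output)
```

Note the last case: with this approach, an event that carries neither tag is silently dropped, whereas separate pipelines have no such gap.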
Using the pipelines.yml file is the recommended way to run multiple pipelines.