
Logstash output is from another input

I have an issue where my Metricbeat events are being caught by my http pipeline.

Logstash, Elasticsearch, and Metricbeat are all running in Kubernetes.

Metricbeat is set up to send to Logstash on port 5044 and log to a file in /tmp, and this works fine. But whenever I create a pipeline with an http input, it also seems to catch the Metricbeat events and send them to the test2 index in Elasticsearch, as defined in the http pipeline.

Why does it behave like this?

/usr/share/logstash/pipeline/http.conf

input {
  http {
    port => "8080"
  }
}

output {

  #stdout { codec => rubydebug }

  elasticsearch {

    hosts => ["http://my-host.com:9200"]
    index => "test2"
  }
}

/usr/share/logstash/pipeline/beats.conf

input {
    beats {
        port => "5044"
    }
}

output {
    file {
        path => '/tmp/beats.log'
        codec => "json"
    }
}

/usr/share/logstash/config/logstash.yml

pipeline.id: main
pipeline.workers: 1
pipeline.batch.size: 125
pipeline.batch.delay: 50
http.host: "0.0.0.0"
http.port: 9600
config.reload.automatic: true
config.reload.interval: 3s

/usr/share/logstash/config/pipeline.yml

- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline"

Even if you have multiple config files, Logstash reads them as a single pipeline, concatenating all of the inputs, filters, and outputs. If you need to run them as separate pipelines, you have two options.
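
Conceptually, your two files behave as if they were one config, so every event from either input reaches every output. Here is a sketch of the effective merged pipeline (not a file you need to create, just what the concatenation amounts to):

input {
  http {
    port => "8080"
  }
  beats {
    port => "5044"
  }
}

output {
  # With no conditionals, both outputs apply to every event,
  # regardless of which input it arrived on.
  elasticsearch {
    hosts => ["http://my-host.com:9200"]
    index => "test2"
  }
  file {
    path => "/tmp/beats.log"
    codec => "json"
  }
}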

Change your pipelines.yml and create different pipeline.ids, each one pointing to one of the config files.

- pipeline.id: beats
  path.config: "/usr/share/logstash/pipeline/beats.conf"
- pipeline.id: http
  path.config: "/usr/share/logstash/pipeline/http.conf"
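
With this layout each pipeline has its own inputs and outputs, so events arriving on the beats port 5044 can only reach the file output, and http events can only reach the test2 index.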

Or you can use tags in your input, filter, and output, for example:

input {
  http {
    port => "8080"
    tags => ["http"]
  }
  beats {
    port => "5044"
    tags => ["beats"]
  }
}
output {
  if "http" in [tags] {
    elasticsearch {
      hosts => ["http://my-host.com:9200"]
      index => "test2"
    }
  }
  if "beats" in [tags] {
    file {
      path => '/tmp/beats.log'
      codec => "json"
    }
  }
}
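
Note that the tags approach keeps everything in a single pipeline, so both inputs share one queue and one set of workers, while separate pipelines are isolated from each other.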

Using the pipelines.yml file is the recommended way to run multiple pipelines.
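
Keep in mind that pipelines.yml is only read when Logstash is started without the -e or -f command-line flags; otherwise it is ignored. Once both pipelines are running, you can confirm the split through Logstash's monitoring API (on the API port 9600 set in the logstash.yml above):

curl -XGET 'http://localhost:9600/_node/pipelines?pretty'

The response should list the beats and http pipelines separately.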
