
Installing Logstash on Kubernetes and Sending Logs to AWS ElasticSearch

I am trying to set up a monitoring solution for apps running on Kubernetes via AWS Elasticsearch.

To ship logs I am using Filebeat --> Logstash --> AWS ElasticSearch, and it's proving to be a nightmare so far :(

To ship logs from Logstash, I need to use the amazon_es output plugin, but I am getting various errors.

Below is the manifest file for Logstash that I am using:

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
  namespace: kube-system
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    # all input will come from filebeat, no local logs
    input {
      beats {
        port => 5044
      }
    }
    filter {
      if [message] =~ /^\{.*\}$/ {
        json {
          source => "message"
        }
      }
      if [ClientHost] {
        geoip {
          source => "ClientHost"
        }
      }
    }
    output {
        amazon_es {
            hosts => [ "https://vpc-eks***********.es.amazonaws.com:443" ]
            region => "eu-west-1"
            index => "devtest-logs-%{+YYYY.MM.dd}"            
        }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:7.1.0
        ports:
        - containerPort: 5044
        volumeMounts:
          - name: config-volume
            mountPath: /usr/share/logstash/config
          - name: logstash-pipeline-volume
            mountPath: /usr/share/logstash/pipeline
      volumes:
      - name: config-volume
        configMap:
          name: logstash-configmap
          items:
            - key: logstash.yml
              path: logstash.yml
      - name: logstash-pipeline-volume
        configMap:
          name: logstash-configmap
          items:
            - key: logstash.conf
              path: logstash.conf
---
kind: Service
apiVersion: v1
metadata:
  name: logstash-service
  namespace: kube-system
spec:
  selector:
    app: logstash
  ports:
  - protocol: TCP
    port: 5044
    targetPort: 5044
  type: ClusterIP
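
For reference, Filebeat ships to this Service on port 5044. A minimal sketch of the matching filebeat.yml output section, assuming Filebeat runs inside the same cluster (the DNS name below follows the standard <service>.<namespace>.svc convention and is not from the original post):

# filebeat.yml -- point Filebeat's logstash output at the ClusterIP Service above
output.logstash:
  hosts: ["logstash-service.kube-system.svc.cluster.local:5044"]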
When the pod starts, Logstash fails with the following errors:

[INFO ] 2019-08-27 13:20:50.114 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>"fb22e6cb-d7bb-4735-a05a-da4a2e4dabde", :path=>"/usr/share/logstash/data/uuid"}
[ERROR] 2019-08-27 13:20:55.290 [Converge PipelineAction::Create<main>] registry - Tried to load a plugin's code, but failed. {:exception=>#<LoadError: no such file to load -- logstash/outputs/amazon_es>, :path=>"logstash/outputs/amazon_es", :type=>"output", :name=>"amazon_es"}
[ERROR] 2019-08-27 13:20:55.295 [Converge PipelineAction::Create<main>] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::PluginLoadingError", :message=>"Couldn't find any output plugin named 'amazon_es'. Are you sure this is correct? Trying to load the amazon_es output plugin resulted in this error: no such file to load -- logstash/outputs/amazon_es", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/plugins/registry.rb:211:in `lookup_pipeline_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/plugin.rb:137:in `lookup'", "org/logstash/plugins/PluginFactoryExt.java:200:in `plugin'", "org/logstash/plugins/PluginFactoryExt.java:137:in `buildOutput'", "org/logstash/execution/JavaBasePipelineExt.java:50:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:23:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:36:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:325:in `block in converge_state'"]}

So far I haven't been able to find anything on this, and the container goes into a crash loop. Any help would be great.

You need to install the amazon_es plugin to fix this issue.

Include this in your Dockerfile:

RUN logstash-plugin install logstash-output-amazon_es
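
For context, a minimal sketch of such a Dockerfile, assuming you base it on the same image the Deployment already uses (the name you build and push it under is your own choice):

# Custom Logstash image that bundles the amazon_es output plugin,
# so the pod does not have to fetch it at startup
FROM docker.elastic.co/logstash/logstash:7.1.0
RUN logstash-plugin install logstash-output-amazon_es

Build and push that image, then point the Deployment's image: field at it instead of the stock docker.elastic.co/logstash/logstash:7.1.0. You can confirm the plugin is present by running logstash-plugin list inside the container. Keep in mind that the amazon_es output also needs AWS credentials at runtime, either from the node/pod IAM role or via the plugin's aws_access_key_id / aws_secret_access_key options, so the pod must be able to sign requests to the VPC endpoint.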
