
How to push logs from kubernetes to elastic cloud deployment?

I am trying to configure logstash and filebeat running in kubernetes to connect and push logs from the kubernetes cluster to my deployment in Elastic Cloud. I have configured the logstash.yaml file with the host, username, and password; please find the configuration below:

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: ns-elastic
data:
  logstash.conf: |-
    input {
      beats {
          port => "9600"
      }
    }

    filter {

      fingerprint {
        source => "message"
        target => "[@metadata][fingerprint]"
        method => "MURMUR3"
      }  

      # Container logs are received with variable named index_prefix
      # Since it is in json format, we can decode it via json filter plugin.
      if [index_prefix] == "store-logs" {

        if [message] =~ /^\{.*\}$/ {
          json {
            source => "message"
            skip_on_invalid_json => true
          }
        }

      }
      if [index_prefix] == "ingress-" {

        if [message] =~ /^\{.*\}$/ {
          json {
            source => "message"
            skip_on_invalid_json => true
          }
        }

      }

      # do not expose index_prefix field to kibana
      mutate {
        # @metadata is not exposed outside of Logstash by default.
        add_field => { "[@metadata][index_prefix]" => "%{index_prefix}-%{+YYYY.MM.dd}" }
        # since we added index_prefix to metadata, we no longer need ["index_prefix"] field.
        remove_field => ["index_prefix"]
      }

    }

    output {
      # Keep this stdout output to inspect the events generated by logstash;
      # comment it out once the pipeline is working.
      stdout { codec => rubydebug }
      elasticsearch {
          hosts => "https://******.es.*****.azure.elastic-cloud.com:9243"
          user => "username"
          password => "*****************"
          document_id => "%{[@metadata][fingerprint]}" 
          # The events will be stored in elasticsearch under previously defined index_prefix value.
          index => "%{[@metadata][index_prefix]}"
      }              
    }

However, logstash restarts with the following error:

[2022-06-19T17:32:31,943][INFO ][org.logstash.beats.Server][main][3cdfe6dec21f50e50e275d7a0c7a3d34d8ead0610c72e80ef9c735c2ef53beb9] Starting server on port: 9600
[2022-06-19T17:32:38,154][ERROR][logstash.javapipeline    ][main][3cdfe6dec21f50e50e275d7a0c7a3d34d8ead0610c72e80ef9c735c2ef53beb9] A plugin had an unrecoverable error. Will restart this plugin.
  Pipeline_id:main
  Plugin: <LogStash::Inputs::Beats port=>9600, id=>"3cdfe6dec21f50e50e275d7a0c7a3d34d8ead0610c72e80ef9c735c2ef53beb9", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_4b2c91f6-9a6f-4e5e-9a96-5b42e20cd0d9", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, add_hostname=>false, ssl_verify_mode=>"none", ssl_peer_metadata=>false, include_codec_tag=>true, ssl_handshake_timeout=>10000, tls_min_version=>1, tls_max_version=>1.3, cipher_suites=>["TLS_AES_128_GCM_SHA256", "TLS_AES_256_GCM_SHA384", "TLS_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60, executor_threads=>1>
  Error: Address already in use
  Exception: Java::JavaNet::BindException
  Stack: sun.nio.ch.Net.bind0(Native Method)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:459)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:448)
sun.nio.ch.ServerSocketChannelImpl.bind(sun/nio/ch/ServerSocketChannelImpl.java:227)
io.netty.channel.socket.nio.NioServerSocketChannel.doBind(io/netty/channel/socket/nio/NioServerSocketChannel.java:134)
io.netty.channel.AbstractChannel$AbstractUnsafe.bind(io/netty/channel/AbstractChannel.java:562)
io.netty.channel.DefaultChannelPipeline$HeadContext.bind(io/netty/channel/DefaultChannelPipeline.java:1334)
io.netty.channel.AbstractChannelHandlerContext.invokeBind(io/netty/channel/AbstractChannelHandlerContext.java:506)
io.netty.channel.AbstractChannelHandlerContext.bind(io/netty/channel/AbstractChannelHandlerContext.java:491)
io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:973)
io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:260)
io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:356)
io.netty.util.concurrent.AbstractEventExecutor.safeExecute(io/netty/util/concurrent/AbstractEventExecutor.java:164)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:472)
io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:500)
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:989)
io.netty.util.internal.ThreadExecutorMap$2.run(io/netty/util/internal/ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(io/netty/util/concurrent/FastThreadLocalRunnable.java:30)
java.lang.Thread.run(java/lang/Thread.java:829)

Can anyone help me understand what I am doing wrong here? My end goal is to push logs from my kubernetes cluster to the elasticsearch service deployed on Elastic Cloud. Please assist, as I have not been able to find enough resources on this.

The error we see in your logs says:

Error: Address already in use
Exception: Java::JavaNet::BindException

This means that another process is already bound to TCP port 9600.
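For illustration, the same condition can be reproduced with a few lines of Python (a minimal sketch; the addresses are arbitrary): binding a second listener to a port that is already taken fails with `EADDRINUSE`, which is the OS-level error that surfaces in the JVM as `Java::JavaNet::BindException`.

```python
import errno
import socket

# First listener: stands in for the process already bound to the port.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
first.listen()
addr = first.getsockname()

# A second bind to the same address/port fails with EADDRINUSE.
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(addr)
    err = None
except OSError as e:
    err = e.errno
finally:
    second.close()
    first.close()

print(err == errno.EADDRINUSE)  # "Address already in use"
```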

You can use `netstat -plant` to check which services are listening on your host. It may be another logstash instance that was not shut down properly.
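One likely culprit in this particular setup: Logstash's own monitoring HTTP API listens on port 9600 by default (the first free port in the 9600-9700 range), so a beats input configured on 9600 inside the same Logstash instance will collide with it. A sketch of a possible fix, moving the input to 5044, the conventional Beats port (the Kubernetes Service and the filebeat output would need to be updated to match):

```
input {
  beats {
    # 9600 is taken by Logstash's own HTTP API by default;
    # use the conventional Beats port instead.
    port => 5044
  }
}
```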

Disclaimer: the technical posts on this site follow the CC BY-SA 4.0 license; if you need to reproduce them, please credit this site or the original source.