
Fluentd Kubernetes Nodejs: Error: connect ECONNREFUSED 127.0.0.1:24224

EDIT: I hardcoded the fluentd service IP directly in my Express app and it's working. How can I get it to work without hardcoding the IP?

I have a couple of pods (Node.js + Express server) running on a Kubernetes cluster.

I'd like to send logs from my Node.js pods to a Fluentd DaemonSet.

But I'm getting this error:

Fluentd error Error: connect ECONNREFUSED 127.0.0.1:24224

I'm using https://github.com/fluent/fluent-logger-node and my configuration is pretty simple:

const logger = require('fluent-logger')

logger.configure('pptr', {
   host: 'localhost',
   port: 24224,
   timeout: 3.0,
   reconnectInterval: 600000
});
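One way to avoid hardcoding the address is to build the logger options from the environment instead of using `host: 'localhost'`. This is only a sketch: `FLUENTD_HOST` and `FLUENTD_PORT` are assumed environment variable names (not from the original post) that you would set on the Deployment, and the fallback is the in-cluster DNS name of a `fluentd` Service in `kube-system`.

```javascript
// Sketch: build the fluent-logger options from the environment.
// FLUENTD_HOST / FLUENTD_PORT are assumed variable names.
function fluentdConfig(env) {
  return {
    // Fall back to the Service's in-cluster DNS name, not localhost.
    host: env.FLUENTD_HOST || 'fluentd.kube-system.svc.cluster.local',
    port: Number(env.FLUENTD_PORT || 24224),
    timeout: 3.0,
    reconnectInterval: 600000 // 10 minutes
  };
}

// Usage in the Express app:
//   const logger = require('fluent-logger');
//   logger.configure('pptr', fluentdConfig(process.env));
```

With this, the same image works in any cluster: the Deployment (or a ConfigMap) supplies the host, and nothing is baked into the code.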

My fluentd conf file:

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

# Ignore fluent logs
<label @FLUENT_LOG>
  <match fluent.*>
    @type null
  </match>
</label>

<match pptr.**>
  @type elasticsearch
  host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
  port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
  scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
  ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
  user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
  password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
  reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'true'}"
  type_name fluentd
  logstash_format true
</match>

Here's the Fluentd DaemonSet config file:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
      version: v1
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
          ports:
            - containerPort: 24224
          env:
            - name:  FLUENT_ELASTICSEARCH_HOST
              value: "xxx"
            - name:  FLUENT_ELASTICSEARCH_PORT
              value: "xxx"
            - name: FLUENT_ELASTICSEARCH_SCHEME
              value: "https"
            # Option to configure elasticsearch plugin with self signed certs
            # ================================================================
            - name: FLUENT_ELASTICSEARCH_SSL_VERIFY
              value: "true"
            # Option to configure elasticsearch plugin with tls
            # ================================================================
            - name: FLUENT_ELASTICSEARCH_SSL_VERSION
              value: "TLSv1_2"
            # X-Pack Authentication
            # =====================
            - name: FLUENT_ELASTICSEARCH_USER
              value: "xxx"
            - name: FLUENT_ELASTICSEARCH_PASSWORD
              value: "xxx"
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - name: config-volume
              mountPath: /fluentd/etc/kubernetes.conf
              subPath: kubernetes.conf
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
        - name: config-volume
          configMap:
            name: fluentd-conf
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers

I also tried to deploy a Service exposing port 24224:

apiVersion: v1
kind: Service
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    app: fluentd
spec:
  ports:
    - name: "24224"
      port: 24224
      targetPort: 24224
  selector:
    k8s-app: fluentd-logging
status:
  loadBalancer: {}

Finally, here is my Express app Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: puppet
  labels:
    app: puppet
spec:
  replicas: 5
  selector:
    matchLabels:
      app: puppet
  template:
    metadata:
      labels:
        app: puppet
    spec:
      containers:
        - name: puppet
          image: myrepo/my-image
          ports:
            - containerPort: 8080


Focusing on the following parts of the question:

I'd like to send logs from my Node.js pods to a Fluentd DaemonSet.

EDIT: I hardcoded the fluentd service IP directly in my Express app and it's working. How can I get it to work without hardcoding the IP?

It looks like the communication between the pods and the fluentd service itself works (hardcoding the IP succeeds). The issue is how the pods address the service.

You can communicate with the fluentd Service by its name. For example (from inside a pod):

  • curl fluentd:24224

You can reach a Service by its short name (like fluentd) only from within the same namespace. If the Service is in another namespace, you need to use its full DNS name. The template and an example:

  • template: service-name.namespace.svc.cluster.local
  • example: fluentd.kube-system.svc.cluster.local
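Since the full name follows a fixed pattern, it can be assembled mechanically. A tiny helper (hypothetical, for illustration only) makes the pattern explicit:

```javascript
// Build the in-cluster DNS name of a Service.
// Pattern: <service-name>.<namespace>.svc.cluster.local
function serviceFqdn(name, namespace) {
  return `${name}.${namespace}.svc.cluster.local`;
}

// serviceFqdn('fluentd', 'kube-system')
//   -> 'fluentd.kube-system.svc.cluster.local'
```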

You can also use a Service of type ExternalName to map the full DNS name of your service to a shorter one, as in the example below.


Assuming (for example) that:

  • You have created a nginx-namespace namespace:
    • $ kubectl create namespace nginx-namespace
  • You have an nginx Deployment inside nginx-namespace and a Service associated with it:
    • $ kubectl create deployment nginx --image=nginx --namespace=nginx-namespace
    • $ kubectl expose deployment nginx --port=80 --type=ClusterIP --namespace=nginx-namespace
  • You want to communicate with the nginx Deployment from another namespace (e.g. default)

You then have several options to communicate with the above pod:

  • By the IP address of the Pod
    • 10.98.132.201
  • By the (full) DNS service name
    • nginx.nginx-namespace.svc.cluster.local
  • By an ExternalName type of Service that points to the (full) DNS service name
    • nginx-service

An example of the ExternalName type of Service:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: default # <- the same as the pod communicating with the service
spec:
  type: ExternalName
  externalName: nginx.nginx-namespace.svc.cluster.local

You can then pass this information (the Service name) to the pod, for example through an environment variable.
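For example (a sketch: the variable name FLUENTD_HOST and its use in the app are assumptions, not from the original manifests), the full DNS name of the fluentd Service could be injected into the puppet Deployment and read in the Express app instead of 'localhost':

```yaml
# Sketch: excerpt from the puppet Deployment's container spec.
# FLUENTD_HOST is an assumed variable name.
containers:
  - name: puppet
    image: myrepo/my-image
    ports:
      - containerPort: 8080
    env:
      - name: FLUENTD_HOST
        value: "fluentd.kube-system.svc.cluster.local"
```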


