
Fluentd Kubernetes Nodejs : Error: connect ECONNREFUSED 127.0.0.1:24224

EDIT: I hardcoded the Fluentd service IP directly in my express app and it's working. How can I get it to work without hardcoding the IP?

I have a couple of pods (nodejs + express server) running on a Kubernetes cluster.

I'd like to send logs from my nodejs pods to a Fluentd DaemonSet.

But I'm getting this error:

Fluentd error Error: connect ECONNREFUSED 127.0.0.1:24224

I'm using https://github.com/fluent/fluent-logger-node and my configuration is pretty simple:

const logger = require('fluent-logger')

logger.configure('pptr', {
  host: 'localhost',
  port: 24224,
  timeout: 3.0,
  reconnectInterval: 600000
});

My fluentd conf file:

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

# Ignore fluent logs
<label @FLUENT_LOG>
  <match fluent.*>
    @type null
  </match>
</label>

<match pptr.**>
  @type elasticsearch
  host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
  port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
  scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
  ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
  user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
  password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
  reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'true'}"
  type_name fluentd
  logstash_format true
</match>

Here's the Fluentd DaemonSet config file:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
      version: v1
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
          ports:
            - containerPort: 24224
          env:
            - name:  FLUENT_ELASTICSEARCH_HOST
              value: "xxx"
            - name:  FLUENT_ELASTICSEARCH_PORT
              value: "xxx"
            - name: FLUENT_ELASTICSEARCH_SCHEME
              value: "https"
            # Option to configure elasticsearch plugin with self signed certs
            # ================================================================
            - name: FLUENT_ELASTICSEARCH_SSL_VERIFY
              value: "true"
            # Option to configure elasticsearch plugin with tls
            # ================================================================
            - name: FLUENT_ELASTICSEARCH_SSL_VERSION
              value: "TLSv1_2"
            # X-Pack Authentication
            # =====================
            - name: FLUENT_ELASTICSEARCH_USER
              value: "xxx"
            - name: FLUENT_ELASTICSEARCH_PASSWORD
              value: "xxx"
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - name: config-volume
              mountPath: /fluentd/etc/kubernetes.conf
              subPath: kubernetes.conf
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
        - name: config-volume
          configMap:
            name: fluentd-conf
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers

I also tried to deploy a Service and expose port 24224:

apiVersion: v1
kind: Service
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    app: fluentd
spec:
  ports:
    - name: "24224"
      port: 24224
      targetPort: 24224
  selector:
    k8s-app: fluentd-logging
status:
  loadBalancer: {}

Finally, here's my express app Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: puppet
  labels:
    app: puppet
spec:
  replicas: 5
  selector:
    matchLabels:
      app: puppet
  template:
    metadata:
      labels:
        app: puppet
    spec:
      containers:
        - name: puppet
          image: myrepo/my-image
          ports:
            - containerPort: 8080


Focusing on these parts of the question:

I'd like to send logs from my nodejs pods to a Fluentd DaemonSet.

EDIT: I hardcoded the Fluentd service IP directly in my express app and it's working. How can I get it to work without hardcoding the IP?

It looks like the communication between the pods and the fluentd service itself is fine (hardcoding the IP works). The issue is how the pods address the service.

You can reach the fluentd Service by its name. For example (from inside a pod):

  • curl fluentd:24224

You can reach a Service by its short name (like fluentd) only from within the same namespace. If the Service is in another namespace, you need to use its full DNS name. The template and an example are:

  • template: service-name.namespace.svc.cluster.local
  • example: fluentd.kube-system.svc.cluster.local
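Following that template, the app can build the host name instead of hardcoding an IP (a sketch; `serviceDnsName` is a hypothetical helper, not part of fluent-logger):

```javascript
// Hypothetical helper: assemble a Kubernetes service DNS name from the
// template service-name.namespace.svc.cluster.local.
function serviceDnsName(service, namespace, clusterDomain = 'cluster.local') {
  return `${service}.${namespace}.svc.${clusterDomain}`;
}

// The fluentd Service from the question lives in kube-system:
const fluentdHost = serviceDnsName('fluentd', 'kube-system');
console.log(fluentdHost); // → fluentd.kube-system.svc.cluster.local
```

This value can then replace `host: 'localhost'` in the `logger.configure` call.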

You can also use a Service of type ExternalName to map the full DNS name of a service to a shorter name, as shown below.


Assume, for example, that:

  • You have created a nginx-namespace namespace:
    • $ kubectl create namespace nginx-namespace
  • You have an nginx Deployment inside the nginx-namespace and a service associated with it:
    • $ kubectl create deployment nginx --image=nginx --namespace=nginx-namespace
    • $ kubectl expose deployment nginx --port=80 --type=ClusterIP --namespace=nginx-namespace
  • You want to communicate with the nginx Deployment from another namespace (i.e. default)

You then have these options to communicate with the above pod:

  • By the IP address of a Pod
    • 10.98.132.201
  • By a (full) DNS service name
    • nginx.nginx-namespace.svc.cluster.local
  • By an ExternalName type of service that points to a (full) DNS service name
    • nginx-service

An example of an ExternalName type of Service:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: default # <- the same as the pod communicating with the service
spec:
  type: ExternalName
  externalName: nginx.nginx-namespace.svc.cluster.local
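Applied to this question, an analogous ExternalName Service (a sketch; assuming the express pods run in default) would let them reach Fluentd simply as fluentd:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: fluentd
  namespace: default          # namespace of the puppet Deployment
spec:
  type: ExternalName
  # Alias pointing at the real fluentd Service in kube-system
  externalName: fluentd.kube-system.svc.cluster.local
```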

You can pass this information to the pod, for example through an environment variable in its Deployment or via a ConfigMap.
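One option (a sketch; FLUENTD_HOST is an assumed variable name) is to inject the host through the puppet Deployment's container spec:

```yaml
containers:
  - name: puppet
    image: myrepo/my-image
    ports:
      - containerPort: 8080
    env:
      - name: FLUENTD_HOST    # assumed variable name
        value: "fluentd.kube-system.svc.cluster.local"
```

The app can then call `logger.configure` with `host: process.env.FLUENTD_HOST` instead of 'localhost'.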


