How to declare multiple output.logstash in single filebeat DaemonSet in kubernetes?
I have 2 applications (Application1, Application2) running on a Kubernetes cluster. I would like to collect the logs from these applications from outside the Kubernetes cluster and save them in different directories (e.g. /var/log/application1/application1-YYYYMMDD.log and /var/log/application2/application2-YYYYMMDD.log).
Therefore I deploy a Filebeat DaemonSet on the Kubernetes cluster to fetch the logs from my applications (Application1, Application2), and run a Logstash service on the instance where I want to save the log files (outside the Kubernetes cluster).
I created 2 filebeat.yml files (filebeat-application1.yml and filebeat-application2.yml) in a ConfigMap and then passed both files as args to the DaemonSet (docker.elastic.co/beats/filebeat:7.10.1) as below.
...
- name: filebeat-application1
  image: docker.elastic.co/beats/filebeat:7.10.1
  args: [
    "-c", "/etc/filebeat-application1.yml",
    "-c", "/etc/filebeat-application2.yml",
    "-e",
  ]
...
But only /etc/filebeat-application2.yml takes effect, so I get logs only from application2.
Can you please help me with how to feed two Filebeat configuration files into the docker.elastic.co/beats/filebeat DaemonSet? Or how to configure two "filebeat.autodiscover:" rules with 2 separate "output.logstash:" outputs?
Below is my complete filebeat-kubernetes-whatsapp.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config-application1
  namespace: logging
  labels:
    k8s-app: filebeat
data:
  filebeat-application1.yml: |-
    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          templates:
            - condition:
                equals:
                  kubernetes.namespace: default
            - condition:
                contains:
                  kubernetes.pod.name: "application1"
              config:
                - type: container
                  paths:
                    - /var/log/containers/*${data.kubernetes.pod.name}*.log

    processors:
      - add_locale:
          format: offset
      - add_kubernetes_metadata:

    output.logstash:
      hosts: ["IP:5045"]
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config-application2
  namespace: logging
  labels:
    k8s-app: filebeat
data:
  filebeat-application2.yml: |-
    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          templates:
            - condition:
                equals:
                  kubernetes.namespace: default
            - condition:
                contains:
                  kubernetes.pod.name: "application2"
              config:
                - type: container
                  paths:
                    - /var/log/containers/*${data.kubernetes.pod.name}*.log

    processors:
      - add_locale:
          format: offset
      - add_kubernetes_metadata:

    output.logstash:
      hosts: ["IP:5044"]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: filebeat-application1
          image: docker.elastic.co/beats/filebeat:7.10.1
          args: [
            "-c", "/etc/filebeat-application1.yml",
            "-c", "/etc/filebeat-application2.yml",
            "-e",
          ]
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
            # If using Red Hat OpenShift uncomment this:
            #privileged: true
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config-application1
              mountPath: /etc/filebeat-application1.yml
              readOnly: true
              subPath: filebeat-application1.yml
            - name: config-application2
              mountPath: /etc/filebeat-application2.yml
              readOnly: true
              subPath: filebeat-application2.yml
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: config-application1
          configMap:
            defaultMode: 0640
            name: filebeat-config-application1
        - name: config-application2
          configMap:
            defaultMode: 0640
            name: filebeat-config-application2
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
        # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
        - name: data
          hostPath:
            # When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: logging
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - namespaces
      - pods
    verbs:
      - get
      - watch
      - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
---
It is not possible; Filebeat supports only one output. From the documentation:

Only a single output may be defined.

You will need to send your logs to the same Logstash instance and filter the output based on some field. For example, assuming that the events sent to Logstash carry the field kubernetes.pod.name, you could use something like this.
output {
  if [kubernetes][pod][name] == "application1" {
    # your output for the application1 log
  }
  if [kubernetes][pod][name] == "application2" {
    # your output for the application2 log
  }
}
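As a variation (my own sketch, not part of the answer above): instead of matching on pod names in Logstash, each autodiscover template could attach an explicit tag via Filebeat's `fields` option and Logstash could route on that. The field name `app` and its values are assumptions for illustration:

```yaml
# Sketch: tag events per application inside each autodiscover template.
# "app" is a made-up routing field, not something Filebeat defines.
filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      templates:
        - condition:
            contains:
              kubernetes.pod.name: "application1"
          config:
            - type: container
              paths:
                - /var/log/containers/*${data.kubernetes.pod.name}*.log
              fields:
                app: application1     # routing key consumed by Logstash
              fields_under_root: true # place "app" at the event top level
```

The Logstash conditional then becomes `if [app] == "application1" { ... }`, which keeps routing stable even if pod naming conventions change.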
I found a working way for my problem. Maybe it is not the correct way, but it meets my requirement.

filebeat-kubernetes-whatsapp.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: logging
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          templates:
            - condition:
                equals:
                  kubernetes.namespace: default
            - condition:
                contains:
                  kubernetes.pod.name: "application1"
              config:
                - type: container
                  paths:
                    - /var/log/containers/*${data.kubernetes.container.id}*.log
            - condition:
                contains:
                  kubernetes.pod.name: "application2"
              config:
                - type: container
                  paths:
                    - /var/log/containers/*${data.kubernetes.container.id}*.log

    processors:
      - add_locale:
          format: offset
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

    output.logstash:
      hosts: ["IP:5044"]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.10.1
          args: [
            "-c", "/etc/filebeat.yml",
            "-e",
          ]
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
            # If using Red Hat OpenShift uncomment this:
            #privileged: true
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              readOnly: true
              subPath: filebeat.yml
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: config
          configMap:
            defaultMode: 0640
            name: filebeat-config
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
        # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
        - name: data
          hostPath:
            # When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: logging
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - namespaces
      - pods
    verbs:
      - get
      - watch
      - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
---
/etc/logstash/conf.d/config.conf
input {
  beats {
    port => 5044
  }
}

#filter {
#  ...
#}

output {
  if "application1" in [kubernetes][pod][name] {
    file {
      enable_metric => false
      gzip => false
      codec => line { format => "[%{[@timestamp]}] [%{[kubernetes][node][name]}/%{[kubernetes][pod][name]}/%{[kubernetes][pod][uid]}] [%{message}]" }
      path => "/abc/def/logs/application1%{+YYYY-MM-dd}.log"
    }
  }
  if "application2" in [kubernetes][pod][name] {
    file {
      enable_metric => false
      gzip => false
      codec => line { format => "[%{[@timestamp]}] [%{[kubernetes][node][name]}/%{[kubernetes][pod][name]}/%{[kubernetes][pod][uid]}] [%{message}]" }
      path => "/abc/def/logs/application2%{+YYYY-MM-dd}.log"
    }
  }
}