Forwarding logs from Kubernetes to Splunk
I'm pretty new to Kubernetes and don't have hands-on experience with it.
My team is facing an issue with the format of the logs that Kubernetes pushes to Splunk.
Our application writes log lines like this:

{"logname" : "app-log", "level" : "INFO"}

but what ends up in Splunk is wrapped in another JSON object:
{
"log" : "{\"logname\": \"app-log\", \"level\": \"INFO \"}",
"stream" : "stdout",
"time" : "2018-06-01T23:33:26.556356926Z"
}
This format makes it harder to query by properties in Splunk.
Is there any option in Kubernetes to forward raw logs from the app rather than wrapping them in another JSON?
I came across this post in Splunk, but the configuration there is done on the Splunk side.
Please let me know if there is any option on the Kubernetes side to send raw logs from the application.
Kubernetes architecture provides three ways to gather logs:
1. Use a node-level logging agent that runs on every node.
You can implement cluster-level logging by including a node-level logging agent on each node. The logging agent is a dedicated tool that exposes logs or pushes logs to a backend. Commonly, the logging agent is a container that has access to a directory with log files from all of the application containers on that node.
The log format depends on Docker settings. You need to set the log-driver parameter in /etc/docker/daemon.json on every node.
For example,
{
"log-driver": "syslog"
}
or
{
"log-driver": "json-file"
}
For more options, check the Docker logging driver documentation.
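A node-level agent is typically deployed as a DaemonSet so that one copy runs on every node. Below is a minimal sketch assuming Fluentd as the agent; the image tag, names, and namespace are illustrative (not from the original post), and a Splunk output plugin would still need to be configured separately.

```yaml
# Sketch: node-level logging agent (option 1) as a DaemonSet.
# Mounts the host's log directories so the agent can read all container logs.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-logging
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd-logging
  template:
    metadata:
      labels:
        app: fluentd-logging
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.16
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: containers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: containers
        hostPath:
          path: /var/lib/docker/containers
```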
2. Include a dedicated sidecar container for logging in an application pod.
You can use a sidecar container in one of the following ways: by having your sidecar containers stream to their own stdout and stderr streams, you can take advantage of the kubelet and the logging agent that already run on each node. The sidecar containers read logs from a file, a socket, or journald. Each sidecar container prints logs to its own stdout or stderr stream.
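The sidecar pattern can be sketched as follows; the container names, image, and file path are illustrative. The app writes to a file on a shared emptyDir volume, and the sidecar tails that file to its own stdout, where the node-level agent picks it up.

```yaml
# Sketch: app container logging to a file, sidecar streaming it to stdout.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["/bin/sh", "-c"]
    args:
    - >
      while true; do
      echo '{"logname": "app-log", "level": "INFO"}' >> /var/log/app.log;
      sleep 5;
      done
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: log-tailer
    image: busybox:1.36
    command: ["/bin/sh", "-c", "tail -n+1 -F /var/log/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log
  volumes:
  - name: logs
    emptyDir: {}
```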
3. Push logs directly to a backend from within an application.
You can implement cluster-level logging by exposing or pushing logs directly from every application.
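For Splunk specifically, option 3 usually means sending events to Splunk's HTTP Event Collector (HEC). A minimal sketch, assuming a reachable HEC endpoint; the host and token below are placeholders, and the helper names are mine, not from the post:

```python
import json
import urllib.request

def build_hec_payload(event: dict, source: str = "app-log") -> bytes:
    """Wrap an application log record in the HEC event envelope."""
    return json.dumps({
        "event": event,          # the raw log record, kept as structured JSON
        "source": source,
        "sourcetype": "_json",   # lets Splunk parse the event as JSON
    }).encode()

def send_to_splunk(event: dict, hec_url: str, token: str) -> None:
    """POST one event to Splunk's HTTP Event Collector endpoint."""
    req = urllib.request.Request(
        hec_url,  # e.g. https://splunk.example.com:8088/services/collector/event
        data=build_hec_payload(event),
        headers={"Authorization": f"Splunk {token}"},
    )
    urllib.request.urlopen(req)  # raises on non-2xx responses

# Usage (placeholder host/token):
# send_to_splunk({"logname": "app-log", "level": "INFO"},
#                "https://splunk.example.com:8088/services/collector/event",
#                "00000000-0000-0000-0000-000000000000")
```

Because the application posts the event itself, Splunk receives the raw JSON record directly, with no Docker json-file wrapper around it.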
For more information, see the official Kubernetes logging documentation.
This week we had the same issue. We used a Splunk forwarder DaemonSet; installing the plugin at https://splunkbase.splunk.com/app/3743/ on the Splunk side solved it for us.
Just to follow up with the solution we tried; this worked for our log structure (props.conf on the Splunk side):

SEDCMD-1_unjsonify = s/{"log":"(?:\\u[0-9]+)?(.*?)\\n","stream.*/\1/g
SEDCMD-2_unescapequotes = s/\\"/"/g
BREAK_ONLY_BEFORE={"logname":
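To illustrate what the two SEDCMD rules do at index time, here is a rough Python equivalent applied to a sample json-file line (the sample line itself is illustrative):

```python
import json
import re

# A raw line as Docker's json-file driver writes it (backslashes are literal).
raw = r'{"log":"{\"logname\": \"app-log\", \"level\": \"INFO\"}\n","stream":"stdout","time":"2018-06-01T23:33:26.556356926Z"}'

# SEDCMD-1_unjsonify: keep only the contents of the "log" field,
# dropping the surrounding {"log": ..., "stream": ..., "time": ...} wrapper.
step1 = re.sub(r'{"log":"(?:\\u[0-9]+)?(.*?)\\n","stream.*', r'\1', raw)

# SEDCMD-2_unescapequotes: turn \" back into plain quotes.
step2 = step1.replace('\\"', '"')

print(step2)  # {"logname": "app-log", "level": "INFO"}
parsed = json.loads(step2)  # the recovered event is valid JSON again
```

After these substitutions, Splunk indexes the application's original JSON record, so field extraction works on its properties directly.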