

How do I export kubernetes node/pod logs from a private network

I have an application running on microk8s at multiple edge server locations, bounded by a trusted VPN network. No inbound access is allowed: no ssh, no rdp. We're only allowed rdp during initial setup or during trimester maintenance windows. I want to scrape the logs from the kubernetes pods, send them over the network (outbound is allowed), and visualize them in Grafana hosted somewhere on the cloud. Is there any out-of-the-box solution for something like this? Azure and AWS have Prometheus-style services that do the same inside a VPC, but what we are looking at here are private LAN networks.

The problem with Prometheus, from my point of view, is that it is a pull-based technology: it depends on the Prometheus engine pulling metrics from the exporters (the pods, in your case), so it requires an inbound connection.
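One caveat to the pull-only picture: a Prometheus instance running *inside* the private network can scrape locally and then push its samples outbound via `remote_write`, which only needs an outbound connection. A minimal config sketch, where the receiver URL and credentials are placeholders for whatever cloud-hosted endpoint you actually use:

```yaml
# prometheus.yml on the edge cluster: scrape locally, push outbound.
# The URL and basic_auth values below are hypothetical placeholders.
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod

remote_write:
  - url: https://metrics.example.com/api/v1/write   # hypothetical cloud receiver
    basic_auth:
      username: edge-site-01
      password: changeme
```

The same pattern is what the managed Azure/AWS Prometheus offerings rely on, so it works from a private LAN as long as outbound HTTPS is permitted.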

If Elasticsearch is a possibility, you can easily set up Elasticsearch and Kibana (its visualization tool) outside your VPN network, and then install Logstash, Fluent Bit, or even Filebeat as a sidecar within your pods. The sidecar's main job is to scrape your logs and push them to the Elasticsearch nodes (push technology), so only outbound connections are needed.
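The sidecar pattern above can be sketched as a pod spec like the following. The image tag, app name, and paths are illustrative assumptions, not values from the question: the app writes logs to a shared `emptyDir` volume, and the Filebeat container reads them and ships them outbound.

```yaml
# Sketch of a pod with a Filebeat log-shipping sidecar.
# The app container and Filebeat share an emptyDir volume;
# Filebeat tails the log files and pushes them to Elasticsearch.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-shipper
spec:
  containers:
    - name: app
      image: my-app:latest                 # hypothetical application image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app          # app writes its logs here
    - name: filebeat
      image: docker.elastic.co/beats/filebeat:8.13.0
      args: ["-e", "-c", "/etc/filebeat.yml"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
  volumes:
    - name: logs
      emptyDir: {}
```

The `filebeat.yml` mounted into the sidecar (e.g. from a ConfigMap) would then point `output.elasticsearch.hosts` at the Elasticsearch cluster outside the VPN; since Filebeat initiates the connection, no inbound access to the edge site is required.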

