
Send Kubernetes cluster logs to AWS Elasticsearch

I have a test Kubernetes cluster, and I created an Elasticsearch domain on AWS that includes Kibana for log management.

Endpoint: https://search-this-is-my-es-wuktx5la4txs7avvo6ypuuyri.ca-central-1.es.amazonaws.com

As far as I can tell from searching, I have to send the logs with Fluentd, so I tried to implement a DaemonSet using this article. No luck.

Could you please share any good documentation with me?

Kibana provides visualization capabilities on top of the content indexed in an Elasticsearch cluster. Users can create bar, line, and scatter plots, or pie charts and maps, on top of large volumes of data.

To push log data into Elasticsearch, most people use Logstash or Fluentd (log/data collectors).
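
As an illustration, here is a minimal sketch of a Fluentd output section that ships collected logs to an Elasticsearch endpoint over HTTPS. It assumes the fluent-plugin-elasticsearch output plugin is installed, and the host value is a placeholder, not your actual endpoint:

# minimal sketch: send everything to the AWS ES endpoint over HTTPS
<match **>
  @type elasticsearch
  # endpoint hostname only, no https:// prefix
  host search-MY-DOMAIN.REGION.es.amazonaws.com
  port 443
  scheme https
  # write logstash-YYYY.MM.DD style indices that Kibana can pick up
  logstash_format true
  <buffer>
    flush_interval 10s
  </buffer>
</match>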

Check out the links below for more info:

https://www.elastic.co/webinars/introduction-elk-stack

https://logz.io/blog/fluentd-logstash/

I had a similar problem. Below are the full details of how I got it working.

Setup:

  • AWS ES instance accessible via a VPC.
  • Using this yaml file as a template.
  • k8s client version v1.9.2
  • k8s server version v1.8.7

Host problem:

The main problem I had was with defining the environment variables correctly. For FLUENT_ELASTICSEARCH_HOST, I was including the https:// prefix on the host URL. Once I removed that, my connection problems went away.
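
In other words (using the same placeholder domain as the sample below), the difference is just:

# wrong: scheme included in the host value, connection fails
- name: FLUENT_ELASTICSEARCH_HOST
  value: "https://vpc-MY-DOMAIN.REGION.es.amazonaws.com"

# right: bare hostname only; the scheme is set separately
- name: FLUENT_ELASTICSEARCH_HOST
  value: "vpc-MY-DOMAIN.REGION.es.amazonaws.com"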

Authentication:

There's no username or password configured for AWS ES. Per this discussion, I set the FLUENT_ELASTICSEARCH_USER and FLUENT_ELASTICSEARCH_PASSWORD values to null.

Sample configuration:

Here's the full set of environment variables in my DaemonSet yaml file:

- name:  FLUENT_ELASTICSEARCH_HOST
  value: "vpc-MY-DOMAIN.REGION.es.amazonaws.com"
- name:  FLUENT_ELASTICSEARCH_PORT
  value: "443"
- name: FLUENT_ELASTICSEARCH_SCHEME
  value: "https"
- name: FLUENT_ELASTICSEARCH_USER
  value: null
- name: FLUENT_ELASTICSEARCH_PASSWORD
  value: null
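
For context, here is a trimmed sketch of where those variables sit in the DaemonSet. It is based on the general shape of the linked template, not copied from it; the image tag, labels, and the omitted serviceAccount/RBAC pieces are illustrative, so check them against the actual yaml file:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        # elasticsearch variant of the fluentd daemonset image; take the exact tag from the template
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        env:
        # the FLUENT_ELASTICSEARCH_* variables listed above go here
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "vpc-MY-DOMAIN.REGION.es.amazonaws.com"
        volumeMounts:
        # read container logs from the node
        - name: varlog
          mountPath: /var/log
        - name: containers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: containers
        hostPath:
          path: /var/lib/docker/containers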

Bonus: connecting to Kibana

Instead of setting up AWS Cognito, I created an nginx pod in my Kubernetes cluster that I use as a proxy to reach Kibana. I use the kubectl port-forward command to reach the nginx server from my local machine.

Here's my nginx.conf:

server {
  listen 80;
  listen [::]:80;

  server_name MY-DOMAIN;

  location /_plugin/kibana {
      proxy_pass https://vpc-MY-DOMAIN.REGION.es.amazonaws.com/_plugin/kibana;
  }
  location / {
      proxy_pass https://vpc-MY-DOMAIN.REGION.es.amazonaws.com;
  }
}
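
One way to wire this up (a sketch, not the exact manifests from my cluster; names like kibana-proxy are just examples) is to put the server block above into a ConfigMap and mount it over nginx's default conf.d directory:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana-proxy-conf
data:
  default.conf: |
    # paste the server block from above here
---
apiVersion: v1
kind: Pod
metadata:
  name: kibana-proxy
  labels:
    app: kibana-proxy
spec:
  containers:
  - name: nginx
    image: nginx:stable
    ports:
    - containerPort: 80
    volumeMounts:
    # replaces /etc/nginx/conf.d/default.conf with the proxy config
    - name: conf
      mountPath: /etc/nginx/conf.d
  volumes:
  - name: conf
    configMap:
      name: kibana-proxy-conf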

Once the nginx pod is deployed, I run this command:

kubectl port-forward POD_NAME 8888:80

Now Kibana is accessible at http://localhost:8888/_plugin/kibana

I'm still having a timeout issue with the port-forward command and a problem with nginx caching the ES service IP (since that can change), but I'll update my response once I resolve those issues.
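
For the IP caching problem specifically, a common nginx workaround (sketched here, not yet verified in this setup) is to add a resolver and proxy to a variable, so nginx re-resolves the ES hostname instead of pinning the IP it looked up at startup. The resolver address below is a typical kube-dns/CoreDNS ClusterIP and will likely differ in your cluster:

server {
  listen 80;

  # re-resolve the ES hostname every 30s instead of caching the IP at startup
  # 10.96.0.10 is a common kube-dns/CoreDNS ClusterIP; replace with your cluster's DNS IP
  resolver 10.96.0.10 valid=30s;
  set $es_upstream vpc-MY-DOMAIN.REGION.es.amazonaws.com;

  location / {
      proxy_pass https://$es_upstream;
  }
}

Because proxy_pass uses a variable with no URI part, nginx forwards the original request URI unchanged, so /_plugin/kibana requests still reach Kibana without a separate location block.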
