
Openshift 3.11 logging to external ElasticSearch instance

I have an external ElasticSearch instance that I'd like Fluentd and Kibana to use in OSE 3.11. The ES instance is insecure at the moment, as this is simply an internal pilot. Based on the OSE docs here ( https://docs.openshift.com/container-platform/3.11/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance ), I should be able to update a number of ES_* variables in the ElasticSearch deployment config accordingly. The first issue is that the variables referenced in the docs don't exist in the ElasticSearch deployment config.

Secondly, I tried updating these values via the inventory file. For example, for the property openshift_logging_es_host , the description claims: The name of the Elasticsearch service where Fluentd should send logs.

These were the values in my inventory file:

openshift_logging_install_logging=true
openshift_logging_es_ops_nodeselector={'node-role.kubernetes.io/infra':'true'}
openshift_logging_es_nodeselector={'node-role.kubernetes.io/infra':'true'}
openshift_logging_es_host='169.xx.xxx.xx'
openshift_logging_es_port='9200'
openshift_logging_es_ops_host='169.xx.xxx.xx'
openshift_logging_es_ops_port='9200'
openshift_logging_kibana_env_vars={'ELASTICSEARCH_URL':'http://169.xx.xxx.xx:9200'}
openshift_logging_es_ca=none
openshift_logging_es_client_cert=none
openshift_logging_es_client_key=none
openshift_logging_es_ops_ca=none
openshift_logging_es_ops_client_cert=none
openshift_logging_es_ops_client_key=none

The only variable above that seems to stick after an uninstall/install of logging is openshift_logging_kibana_env_vars. I'm not sure why the others weren't respected; perhaps I'm missing one that triggers use of these vars.

In any case, after those attempts failed, I eventually found the values set on the logging-fluentd DaemonSet. I can edit via the CLI or the console to set the ES host, port, client keys, certs, etc., and I also set the ops equivalents. The fluentd logs confirm these values are set; however, it's attempting to use https in conjunction with the default fluentd/changeme user/password combo.

2019-03-08 11:49:00 -0600 [warn]: temporarily failed to flush the buffer. next_retry=2019-03-08 11:54:00 -0600 error_class="Fluent::ElasticsearchOutput::ConnectionFailure" error="Can not reach Elasticsearch cluster ({:host=>\"169.xx.xxx.xx\", :port=>9200, :scheme=>\"https\", :user=>\"fluentd\", :password=>\"obfuscated\"})!" plugin_id="elasticsearch-apps"
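For reference, the variables I edited on the DaemonSet are the ones the linked docs describe. A rough sketch of the relevant container env entries (the IP is the same placeholder used above, and the exact list may differ by release):

```yaml
# Excerpt of the logging-fluentd DaemonSet container env (values are placeholders)
- name: ES_HOST
  value: "169.xx.xxx.xx"
- name: ES_PORT
  value: "9200"
- name: OPS_HOST
  value: "169.xx.xxx.xx"
- name: OPS_PORT
  value: "9200"
```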

So, ideally, I'd like to set these as inventory variables and have everything just work. If anybody has a suggestion to fix that issue, please let me know.

Less ideally, I can modify the ES deployment config or the Fluentd DaemonSet post-install and set the required values, assuming someone knows how to avoid https?

Thanks for any input you might have.

Update:

I managed to get this working, but not via the documented properties or the provided suggestion. I ended up going through the various playbooks to identify the vars actually being used. I also had to set up mutual TLS, because when I specified the cert file locations as none/undefined, the logs indicated 'File not found'. Essentially, none or undefined gets translated to "", which fluentd then tries to open as a file. So, this was the magic combination of properties that will get you 99.9% of the way:

openshift_logging_es_host=169.xx.xxx.xxx
openshift_logging_fluentd_app_host=169.xx.xxx.xxx
openshift_logging_fluentd_ops_host=169.xx.xxx.xxx
openshift_logging_fluentd_ca_path='/tmp/keys/client-ca.cer'
openshift_logging_fluentd_key_path='/tmp/keys/client.key'
openshift_logging_fluentd_cert_path='/tmp/keys/client.cer'
openshift_logging_fluentd_ops_ca_path='/tmp/keys/client-ca.cer'
openshift_logging_fluentd_ops_key_path='/tmp/keys/client.key'
openshift_logging_fluentd_ops_cert_path='/tmp/keys/client.cer'

Notes:

  • You need to copy the keys to /tmp/keys beforehand.
  • Upon completion, you will notice that OPS_HOST will not be set on the DaemonSet. I left it in the properties above because I think it's just a bug, and perhaps it will be fixed beyond 3.11, which is what I'm using. To adjust this, simply oc edit ds/logging-fluentd and modify accordingly.
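As an alternative to editing the DaemonSet interactively, the missing value can also be patched non-interactively with oc set env; a sketch, with the same placeholder IP as above:

```shell
# Set OPS_HOST on the fluentd DaemonSet without opening an editor
oc set env ds/logging-fluentd OPS_HOST=169.xx.xxx.xx -n openshift-logging
```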

With these changes, the log data gets sent to my external ES instance.

My suggestion is a less ideal solution, which is sending logs to an external log aggregator using secure-forward.conf ; refer to the Configuring Fluentd to Send Logs to an External Log Aggregator section for more details.

You can configure the elasticsearch output plugin as well as the secure_forward plugin without https .

For instance,

# oc edit cm logging-fluentd -n openshift-logging
...
  secure-forward.conf: |
    <store>
      @type elasticsearch
      host external.es.example.com
      port 9200
    </store>
...
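If the plugin still attempts https, the fluent-plugin-elasticsearch output also accepts a scheme parameter, so plain http can be pinned explicitly. A sketch, with external.es.example.com as a placeholder host:

```
  secure-forward.conf: |
    <store>
      @type elasticsearch
      host external.es.example.com
      port 9200
      scheme http
    </store>
```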

UPDATE : I've tested against an external fluentd instead of ES , because I don't have an external ES instance at hand. To check that logging was active, I also printed the logs out to a file during the test.

  secure-forward.conf: |
    <store>
      @type forward
      <server>
        host external.fluented.example.com
        port 24224
      </server>
    </store>
    <store>
      @type file
      path /var/log/secure-forward-test.log
    </store>

I've verified that the above configuration can transfer the logs to the external fluentd and to local log files.
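When testing against a real external ES instance instead, a quick way to confirm that documents are arriving over plain http is to list the indices on the target (host is a placeholder):

```shell
# List indices on the external Elasticsearch over plain http
curl -s http://external.es.example.com:9200/_cat/indices?v
```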
