K8S - Check certificate validity with Prometheus
I need to monitor certificate validity for a K8S cluster, e.g. use Alertmanager to send a suitable notification when a certificate is about to expire.
I found this repo, but I'm not sure how to configure it: what should the target be, and how do I make it work?
https://github.com/ribbybibby/ssl_exporter
which is based on the blackbox exporter:
https://github.com/prometheus/blackbox_exporter
- job_name: "ssl"
  metrics_path: /probe
  static_configs:
    - targets:
        - 127.0.0.1
  relabel_configs:
    - source_labels: [__address__]
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    - target_label: __address__
      replacement: 127.0.0.1:9219 # SSL exporter.
I want to check the current K8S cluster (where Prometheus is deployed) to see whether its certificate is valid or not. What should I put inside the target to make it work?
Do I need to expose something in the cluster?
Update: this is where our certificate is located in the system:
tls:
  mode: SIMPLE
  privateKey: /etc/istio/bide-tls/tls.key
  serverCertificate: /etc/istio/bide-tls/tls.crt
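As a quick sanity check before wiring up any exporter, you can inspect a certificate's expiry directly with openssl. A minimal sketch; it generates a throwaway self-signed certificate purely for illustration, whereas in the cluster you would exec into the Pod and point the same commands at /etc/istio/bide-tls/tls.crt:

```shell
# Generate a throwaway 90-day self-signed cert to demonstrate the commands
# (in-cluster, use the real file at /etc/istio/bide-tls/tls.crt instead).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/tls.key -out /tmp/tls.crt \
  -days 90 -subj "/CN=example.local"

# Print the certificate's expiry date
openssl x509 -in /tmp/tls.crt -noout -enddate

# Exit 0 if the cert is still valid 30 days (2592000 s) from now, 1 otherwise
openssl x509 -in /tmp/tls.crt -noout -checkend 2592000 \
  && echo "certificate ok" || echo "certificate expires soon"
```

This is the same check the exporter automates and turns into metrics that Prometheus can alert on.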
My scenario is:
Prometheus and the ssl_exporter are in the same cluster, and the certificate they need to check is also in the same cluster (see the config above).
What should I put inside the target to make it work?
I think the "Targets" section of the readme is clear: it contains the endpoints that you wish the monitor to report on:
static_configs:
  - targets:
      - kubernetes.default.svc.cluster.local:443
      - gitlab.com:443
relabel_configs:
  - source_labels: [__address__]
    target_label: __param_target
  - source_labels: [__param_target]
    target_label: instance
  - target_label: __address__
    # rewrite to contact the SSL exporter
    replacement: 127.0.0.1:9219
Do I need to expose something in the cluster?
Depends on whether you want to report on internal certificates, or whether the ssl_exporter can reach the endpoints you want. For example, in the snippet above, I used the KubeDNS name kubernetes.default.svc.cluster.local with the assumption that ssl_exporter is running as a Pod within the cluster. If that doesn't apply to you, then you would want to change that endpoint to k8s.my-cluster-dns.example.com:6443, or whatever endpoint your kubernetes API is listening on that your kubectl can reach.
Then, in the same vein, if both prometheus and your ssl_exporter are running inside the cluster, you would change replacement: to be the Service IP address that is backed by your ssl_exporter Pods. If prometheus is outside the cluster and ssl_exporter is inside the cluster, then you'll want to create a Service of type: NodePort, so you can point your prometheus at one (or all?) of the Node IP addresses and the NodePort upon which ssl_exporter is listening.
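To make that concrete, a Service along those lines might look like the sketch below. The name, labels, and nodePort value are illustrative assumptions, not taken from the question; only port 9219 comes from ssl_exporter's default:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ssl-exporter            # hypothetical name
spec:
  type: NodePort                # omit (defaults to ClusterIP) if Prometheus is in-cluster
  selector:
    app: ssl-exporter           # must match the labels on your ssl_exporter Pods
  ports:
    - port: 9219                # ssl_exporter's default listening port
      targetPort: 9219
      nodePort: 30219           # only meaningful for type: NodePort
```

With an in-cluster Prometheus you would then set replacement: to something like ssl-exporter.<namespace>.svc.cluster.local:9219; with type: NodePort, point an out-of-cluster Prometheus at <node-ip>:30219 instead.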
The only time one would use the literal 127.0.0.1:9219 is if prometheus and the ssl_exporter are running on the same machine or in the same Pod, since that's the only way that 127.0.0.1 is meaningful from prometheus's point of view.
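Finally, to close the loop with Alertmanager as the question asks, you can alert on the exporter's expiry metric. A sketch, assuming ssl_exporter exposes the certificate expiry as the Unix-timestamp gauge ssl_cert_not_after (verify the metric names your exporter version actually emits):

```yaml
groups:
  - name: ssl-expiry
    rules:
      - alert: SSLCertExpiringSoon
        # ssl_cert_not_after is a Unix timestamp; fire when < 30 days remain
        expr: ssl_cert_not_after - time() < 86400 * 30
        for: 1h
        labels:
          severity: warning
        annotations:
          summary: "Certificate for {{ $labels.instance }} expires in under 30 days"
```

Alertmanager then routes this alert to whatever receiver (email, Slack, etc.) you have configured.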