
Inter-pod communication in Kubernetes

I had previously created a single pod with three containers: prometheus, blackbox-exporter, and python-access-api. The blackbox-exporter runs on port 9115 and probes the targets generated by the python-access-api container, and prometheus alerts on SSL certificate expiry for those targets. Now I want to move the blackbox-exporter to a different pod. I have tried to wire this up via a service, but I am failing to establish communication between prometheus and blackbox-exporter now that they are in different pods. As a result, the SSL certificate expiry probes never run, and I cannot see the alerts in prometheus. Below are the YAML files I have used; can anyone please point out a way out of this problem? Note that my prometheus configuration looks fine, and the blackbox and prometheus pods are each running fine individually; they just do not communicate with each other.
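For context, prometheus triggers a check by calling the exporter's /probe endpoint with a module and a target, so the request that has to get through from the prometheus pod to the exporter pod looks roughly like this (example.com stands in for one of my real targets):

http://<blackbox-exporter-address>:9115/probe?module=http_2xx&target=https://example.com

The YAML file for the blackbox deployment: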

apiVersion: v1
kind: ReplicationController
metadata:
  name: blackbox-deployment
  labels:
    app: prometheus
spec:
  replicas: 1
  selector:
    app: blackbox
  template:
    metadata:
      name: blackbox
      labels:
        app: blackbox
    spec:
      containers:
      - name: blackbox
        # assuming the stock exporter image; it listens on 9115 as described above
        image: prom/blackbox-exporter
        ports:
        - containerPort: 9115

YAML file for the prometheus deployment:

apiVersion: v1
kind: ReplicationController
metadata:
  name: python-daemon
  labels:
    app: prometheus-python
spec:
  replicas: 1
  selector:
    app: python
  template:
    metadata:
      name: python
      labels:
        app: python
    spec:
      containers:
      # the two containers remaining in this pod; images and ports as in the original setup
      - name: prometheus
      - name: python-access-api

The service that I have deployed:

apiVersion: v1
kind: Service
metadata:
  name: prometheus
spec:
  selector:
    app: prometheus
  ports:
  - name: http
    port: 80
    targetPort: 9115
    protocol: TCP

The prometheus config is as follows:

- job_name: blackbox
  params:
    module:
    - http_2xx
  scrape_interval: 1m
  scrape_timeout: 10s
  metrics_path: /probe
  scheme: http
  file_sd_configs:
  - files:
    - /var/suhas/targets.yml
    refresh_interval: 5m
  relabel_configs:
  - source_labels: [__address__]
    separator: ;
    regex: (.*)
    target_label: __param_target
    replacement: $1
    action: replace
  - source_labels: [__param_target]
    separator: ;
    regex: (.*)
    target_label: instance
    replacement: $1
    action: replace
  - source_labels: []
    separator: ;
    regex: (.*)
    target_label: __address__
    replacement: prometheus:9115
    action: replace
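For reference, /var/suhas/targets.yml follows Prometheus's file-based service discovery format; the hostnames here are placeholders for my real targets:

- targets:
  - https://example.com
  - https://example.org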

First of all, you need to clarify which direction the communication goes: is prometheus accessing the blackbox exporter on port 9115, or is the blackbox exporter accessing prometheus on port 80? Depending on which it is, the service would look different.

In your service above, you are defining an endpoint which, when accessed on port 80, forwards traffic to port 9115 of your blackbox-exporter app. For the rest of this answer I will assume that prometheus is the one accessing the blackbox-exporter.

So, would prometheus access the blackbox exporter via port 9115 or via port 80? It looks to me that, in your initial setup, prometheus used port 9115. Therefore, there is no reason to change the port to 80 in the service. Could you try setting port: 9115 in your service file instead, as in the sketch below?
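One caveat in this sketch: the selector is my assumption, since a Service routes to the pods whose labels match its selector, and your blackbox pod template is labeled app: blackbox rather than app: prometheus:

apiVersion: v1
kind: Service
metadata:
  name: prometheus        # name kept as-is so the prometheus:9115 address in your config still resolves
spec:
  selector:
    app: blackbox         # assumption: must match the exporter pod's template labels
  ports:
  - name: http
    port: 9115            # expose the service on the same port the exporter listens on
    targetPort: 9115
    protocol: TCP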

Also, make sure you configure prometheus to use the correct address. I assume it previously used 127.0.0.1:9115; now it needs to be prometheus:9115 (since you named the service prometheus, which can be a bit confusing).
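Your last relabel rule already rewrites __address__ to the service name. If prometheus and the exporter end up in different namespaces, you would need the fully qualified service DNS name instead (default below is a placeholder for the service's actual namespace):

  - source_labels: []
    separator: ;
    regex: (.*)
    target_label: __address__
    replacement: prometheus.default.svc.cluster.local:9115
    action: replace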
