
Issue with monitoring a custom service with Prometheus in a Kubernetes namespace

My goal is to monitor services with Prometheus, so I was following a guide located at:

https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md

I am relatively new to all of this, so please forgive my naivety. I tried looking into the error, but all the answers I found were convoluted, and I have no idea where to start debugging (perhaps by looking into the YAMLs?).

I wanted to monitor a custom Service, so I deployed the following service.yaml into a custom namespace (t):

kind: Service
apiVersion: v1
metadata:
  namespace: t
  name: example-service-test
  labels:
    app: example-service-test
spec:
  selector:
    app: example-service-test
  type: NodePort
  ports:
  - name: http
    nodePort: 30901
    port: 8080
    protocol: TCP
    targetPort: http
---
apiVersion: v1
kind: Pod
metadata:
  name: example-service-test
  namespace: t
  labels:
    app: example-service-test
spec:
  containers:
  - name: example-service-test
    image: python:2.7
    imagePullPolicy: IfNotPresent
    command: ["/bin/bash"]
    args: ["-c", "echo \"<p>This is POD1 $(hostname)</p>\" > index.html; python -m SimpleHTTPServer 8080"]
    ports:
    - name: http
      containerPort: 8080

And deployed a ServiceMonitor into the same namespace:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-service-test
  labels:
    team: frontendtest1
  namespace: t
spec:
  selector:
    matchLabels:
      app: example-service-test
  endpoints:
  - port: http

So far, the ServiceMonitor is detecting the service (it appears on the Prometheus Service Discovery page). However, the Prometheus Targets page shows an error when it tries to obtain metrics from the service.

From what I can tell, Prometheus isn't able to access /metrics on the sample service. In that case, do I need to expose the metrics myself? If so, could I get a step-by-step guide on how to expose them? If not, what route should I take?

I'm afraid you may have missed the key point of the CoreOS tutorial you are following: how metrics get from an application to Prometheus.

First, deploy three instances of a simple example application, which listens and exposes metrics on port 8080

Yes, your application (website) listens on port 8080, but it does not expose any metrics on a '/metrics' endpoint in a format Prometheus understands.

You can see what kind of metrics I'm talking about by hitting that endpoint from inside the Pod/container where the application is hosted:

kubectl exec -it $(kubectl get po -l app=example-app -o jsonpath='{.items[0].metadata.name}') -c example-app -- curl localhost:8080/metrics

You should see output similar to this:

# HELP codelab_api_http_requests_in_progress The current number of API HTTP requests in progress.
# TYPE codelab_api_http_requests_in_progress gauge
codelab_api_http_requests_in_progress 1
# HELP codelab_api_request_duration_seconds A histogram of the API HTTP request durations in seconds.
# TYPE codelab_api_request_duration_seconds histogram
codelab_api_request_duration_seconds_bucket{method="GET",path="/api/bar",status="200",le="0.0001"} 0
codelab_api_request_duration_seconds_bucket{method="GET",path="/api/bar",status="200",le="0.00015000000000000001"} 0
codelab_api_request_duration_seconds_bucket{method="GET",path="/api/bar",status="200",le="0.00022500000000000002"} 0
codelab_api_request_duration_seconds_bucket{method="GET",path="/api/bar",status="200",le="0.0003375"} 0
codelab_api_request_duration_seconds_bucket{method="GET",path="/api/bar",status="200",le="0.00050625"} 0
codelab_api_request_duration_seconds_bucket{method="GET",path="/api/bar",status="200",le="0.000759375"} 0

Please read more about the ways of exposing metrics in the Prometheus documentation.
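To make the idea concrete, here is a minimal sketch (not the tutorial's example-app; the metric name app_http_requests_total and the handler are my own illustration) of a stdlib-only Python server like the one in your Pod that also serves a counter on /metrics in the Prometheus text exposition format:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

REQUEST_COUNT = 0  # incremented on every page request


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        global REQUEST_COUNT
        if self.path == "/metrics":
            # Prometheus text format: HELP/TYPE comments plus one sample line.
            body = (
                "# HELP app_http_requests_total Total HTTP requests served.\n"
                "# TYPE app_http_requests_total counter\n"
                "app_http_requests_total %d\n" % REQUEST_COUNT
            ).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
        else:
            REQUEST_COUNT += 1
            body = b"<p>This is the demo page</p>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass


# Bind an ephemeral port for the demo; in the Pod above this would be 8080.
server = HTTPServer(("127.0.0.1", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

urllib.request.urlopen("http://127.0.0.1:%d/" % port).read()  # one page hit
metrics = urllib.request.urlopen("http://127.0.0.1:%d/metrics" % port).read().decode()
print(metrics)
server.shutdown()
```

In practice you would not format the exposition text by hand: the official prometheus_client library for Python provides counters, gauges, and histograms and a built-in HTTP endpoint, which is the usual way to instrument an application like this.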
