
Prometheus custom metric does not appear in custom.metrics kubernetes

I have configured all of the following, but request_per_second does not appear when I run the command:

kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1

In the Node.js app that should be monitored I installed prom-client; I tested /metrics and it works fine, and the metric "request_count" is among what it returns.

Here are the relevant parts of that Node.js code:

(...)
const counter = new client.Counter({
  name: 'request_count',
  help: 'The total number of processed requests'
});
(...)

router.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType)
  res.end(await client.register.metrics())
})
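For the counter to ever grow it has to be incremented somewhere in the request path. A minimal sketch of how that is typically wired up with Express and prom-client (the middleware below is illustrative, not the exact code of the service):

const express = require('express');
const client = require('prom-client');

const router = express.Router();

// Same counter as above
const counter = new client.Counter({
  name: 'request_count',
  help: 'The total number of processed requests'
});

// Increment the counter for every request handled by this router;
// the /metrics route shown above stays unchanged
router.use((req, res, next) => {
  counter.inc();
  next();
});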

This is my service monitor configuration

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: un1qnx-validation-service-monitor-node
  namespace: default
  labels:
    app: node-request-persistence
    release: prometheus
spec:
  selector:
    matchLabels:
      app: node-request-persistence
  endpoints:
  - interval: 5s
    path: /metrics
    port: "80"
    bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
  namespaceSelector:
    matchNames:
    - un1qnx-aks-development
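For context, a ServiceMonitor selects a Service by label, not the pods directly, and its port field refers to a named Service port. No Service is shown here, so something along these lines would have to exist; the name and the port name http are assumptions for illustration:

apiVersion: v1
kind: Service
metadata:
  name: node-request-persistence
  namespace: un1qnx-aks-development
  labels:
    app: node-request-persistence
spec:
  selector:
    app: node-request-persistence
  ports:
  - name: http          # referenced by the ServiceMonitor "port" field
    port: 80
    targetPort: node-port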

This is the node-request-persistence Deployment configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: node-request-persistence
  namespace: un1qnx-aks-development
  name: node-request-persistence
spec:
  selector:
    matchLabels:
      app: node-request-persistence
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: /metrics
        prometheus.io/port: "80"
      labels:
        app: node-request-persistence
    spec:
      containers:
      - name: node-request-persistence
        image: node-request-persistence
        imagePullPolicy: Always # IfNotPresent
        resources:
          requests:
            memory: "200Mi" # Gi
            cpu: "100m"
          limits:
            memory: "400Mi"
            cpu: "500m"
        ports:
        - name: node-port
          containerPort: 80

This is the Prometheus Adapter configuration:

prometheus:
  url: http://prometheus-server.default.svc.cluster.local
  port: 9090
rules:
  custom:
  - seriesQuery: 'request_count{namespace!="", pod!=""}'
    resources:
      overrides:
        namespace: {resource: "namespace"}
        pod: {resource: "pod"}
    name:
      as: "request_per_second"
    metricsQuery: "round(avg(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))"

This is the HPA:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: un1qnx-validation-service-hpa-angle
  namespace: un1qnx-aks-development
spec:
  minReplicas: 1
  maxReplicas: 10
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: un1qnx-validation-service-angle
  metrics:
  - type: Pods
    pods:
      metric:
        name: request_per_second
      target:
        type: AverageValue
        averageValue: "5"

The command

kubectl get hpa -n un1qnx-aks-development

results in "unknown/5"

Also, the command

kubectl get --raw "http://prometheus-server.default.svc.cluster.local:9090/api/v1/series"

Results in

Error from server (NotFound): the server could not find the requested resource

I think it should return some values about the collected metrics... I believe the problem is with the ServiceMonitor, but I am new to this.

As you may have noticed, I am trying to scale a deployment based on another deployment's pods; I don't know if there is a problem with that.

I would appreciate an answer, because this is for my thesis.

kubernetes - version 1.19.9

Prometheus - chart prometheus-14.2.1 app version 2.26.0

Prometheus Adapter - chart 2.14.2 app version 0.8.4

All of them were installed using Helm.

After some time I found the problems and changed the following:

I changed the port on the Prometheus Adapter, the time window in the query, and the label names in the resource overrides. To find the correct label names for the overrides, you need to port-forward to the Prometheus server and check the labels on the Targets page for the app you are monitoring.
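For example (the service name and namespace match the url in the adapter values below; adjust them to your release):

kubectl port-forward -n default svc/prometheus-server 9090:80

# then open http://localhost:9090/targets in a browser,
# or list the series and its labels directly:
curl 'http://localhost:9090/api/v1/series?match[]=request_count'

The updated Prometheus Adapter configuration: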

prometheus:
  url: http://prometheus-server.default.svc.cluster.local
  port: 80
rules:
  custom:
  - seriesQuery: 'request_count{kubernetes_namespace!="", kubernetes_pod_name!=""}'
    resources:
      overrides:
        kubernetes_namespace: {resource: "namespace"}
        kubernetes_pod_name: {resource: "pod"}
    name:
      matches: "request_count"
      as: "request_count"
    metricsQuery: "round(avg(rate(<<.Series>>{<<.LabelMatchers>>}[5m])) by (<<.GroupBy>>))"

I also added the following annotations to the Deployment YAML:

spec:
  selector:
    matchLabels:
      app: node-request-persistence
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: /metrics
        prometheus.io/port: "80"
      labels:
        app: node-request-persistence
