
Prometheus custom metric does not appear in custom.metrics kubernetes

I configured all of the following, but request_per_second does not appear when I run the command:

kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
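
For reference, once the metric is exposed it should also be queryable for individual pods at a more specific path, something like this (using the namespace from the manifests below):

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/un1qnx-aks-development/pods/*/request_per_second"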

In the Node.js app that should be monitored I installed prom-client. I tested /metrics and it works very well; the metric "request_count" is what it returns.

Here are the important parts of that Node.js code:

(...)
const counter = new client.Counter({
  name: 'request_count',
  help: 'The total number of processed requests'
});
(...)

router.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType)
  res.end(await client.register.metrics())
})
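
For context, a minimal self-contained version of such an endpoint could look like the sketch below (assuming Express and prom-client; the middleware that increments the counter is an assumption, since that part of the code is elided above):

const express = require('express');
const client = require('prom-client');

const app = express();

// Counter tracking how many requests have been processed
const counter = new client.Counter({
  name: 'request_count',
  help: 'The total number of processed requests'
});

// Increment the counter on every incoming request (assumed; not shown above)
app.use((req, res, next) => {
  counter.inc();
  next();
});

// Expose all registered metrics in the Prometheus text format
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType);
  res.end(await client.register.metrics());
});

app.listen(80);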

This is my ServiceMonitor configuration:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: un1qnx-validation-service-monitor-node
  namespace: default
  labels:
    app: node-request-persistence
    release: prometheus
spec:
  selector:
    matchLabels:
      app: node-request-persistence
  endpoints:
  - interval: 5s
    path: /metrics
    port: "80"
    bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
  namespaceSelector:
    matchNames:
    - un1qnx-aks-development
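
Note that a ServiceMonitor selects Services (not Pods) by label, and its endpoint port refers to a named service port, so a Service along these lines would also have to exist (hypothetical; it is not shown in this question):

apiVersion: v1
kind: Service
metadata:
  name: node-request-persistence
  namespace: un1qnx-aks-development
  labels:
    app: node-request-persistence
spec:
  selector:
    app: node-request-persistence
  ports:
  - name: http
    port: 80
    targetPort: node-port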

This is the node-request-persistence Deployment configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: node-request-persistence
  namespace: un1qnx-aks-development
  name: node-request-persistence
spec:
  selector:
    matchLabels:
      app: node-request-persistence
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: /metrics
        prometheus.io/port: "80"
      labels:
        app: node-request-persistence
    spec:
      containers:
      - name: node-request-persistence
        image: node-request-persistence
        imagePullPolicy: Always # IfNotPresent
        resources:
          requests:
            memory: "200Mi" # Gi
            cpu: "100m"
          limits:
            memory: "400Mi"
            cpu: "500m"
        ports:
        - name: node-port
          containerPort: 80
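
A quick way to double-check that the pod really serves metrics inside the cluster is to port-forward to the Deployment and hit the endpoint directly (a sketch; the local port 8080 is arbitrary):

kubectl port-forward -n un1qnx-aks-development deploy/node-request-persistence 8080:80
curl http://localhost:8080/metrics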

This is the Prometheus Adapter configuration:

prometheus:
  url: http://prometheus-server.default.svc.cluster.local
  port: 9090
rules:
  custom:
  - seriesQuery: 'request_count{namespace!="", pod!=""}'
    resources:
      overrides:
        namespace: {resource: "namespace"}
        pod: {resource: "pod"}
    name:
      as: "request_per_second"
    metricsQuery: "round(avg(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))"
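
For reference, with this rule the adapter's query template expands to something roughly like the following (a sketch; the actual label matchers are filled in by the adapter for each API request):

round(avg(rate(request_count{namespace="un1qnx-aks-development",pod=~"<pods from the API request>"}[1m])) by (pod))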

This is the HPA:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: un1qnx-validation-service-hpa-angle
  namespace: un1qnx-aks-development
spec:
  minReplicas: 1
  maxReplicas: 10
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: un1qnx-validation-service-angle
  metrics:
  - type: Pods
    pods:
      metric:
        name: request_per_second
      target:
        type: AverageValue
        averageValue: "5"

The command

kubectl get hpa -n un1qnx-aks-development

results in "unknown/5".

Also, the command

kubectl get --raw "http://prometheus-server.default.svc.cluster.local:9090/api/v1/series" kubectl get --raw "http://prometheus-server.default.svc.cluster.local:9090/api/v1/series"

results in

Error from server (NotFound): the server could not find the requested resource

I think it should return some values about the collected metrics... I suspect the problem is in the ServiceMonitor, but I am new to this.

As you may have noticed, I am trying to scale one deployment based on the pods of another deployment; I don't know whether there is a problem with that.

I would appreciate an answer, because this is for my thesis.

Kubernetes - version 1.19.9

Prometheus - chart prometheus-14.2.1, app version 2.26.0

Prometheus Adapter - chart 2.14.2, app version 0.8.4

All were installed using Helm.

After some time I found the problems and changed the following:

I changed the port on the Prometheus Adapter, the time window in the query, and the label names in the resource overrides. To find the right label names for the overrides, you need to port-forward to the Prometheus server and check the labels on the Targets page for the app you are monitoring.

prometheus:
  url: http://prometheus-server.default.svc.cluster.local
  port: 80
rules:
  custom:
  - seriesQuery: 'request_count{kubernetes_namespace!="", kubernetes_pod_name!=""}'
    resources:
      overrides:
        kubernetes_namespace: {resource: "namespace"}
        kubernetes_pod_name: {resource: "pod"}
    name:
      matches: "request_count"
      as: "request_count"
    metricsQuery: "round(avg(rate(<<.Series>>{<<.LabelMatchers>>}[5m])) by (<<.GroupBy>>))"
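
The check described above looks roughly like this (a sketch; the service name and port are taken from the adapter config above, and the last command verifies the renamed metric through the custom metrics API):

kubectl port-forward -n default svc/prometheus-server 9090:80
# then open http://localhost:9090/targets and inspect the labels on the scraped target

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/un1qnx-aks-development/pods/*/request_count"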

I also added these annotations to the deployment YAML:

spec:
  selector:
    matchLabels:
      app: node-request-persistence
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: /metrics
        prometheus.io/port: "80"
      labels:
        app: node-request-persistence
