
The server could not find the metric nginx_vts_server_requests_per_second for pods

I installed kube-prometheus-0.9.0, and want to deploy a sample application with the following resource manifest file to test Prometheus metrics autoscaling: (hpa-prome-demo.yaml)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-prom-demo
spec:
  selector:
    matchLabels:
      app: nginx-server
  template:
    metadata:
      labels:
        app: nginx-server
    spec:
      containers:
      - name: nginx-demo
        image: cnych/nginx-vts:v1.0
        resources:
          limits:
            cpu: 50m
          requests:
            cpu: 50m
        ports:
        - containerPort: 80
          name: http
---
apiVersion: v1
kind: Service
metadata:
  name: hpa-prom-demo
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "80"
    prometheus.io/path: "/status/format/prometheus"
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http
  selector:
    app: nginx-server
  type: NodePort

For testing purposes, a NodePort Service is used, and fortunately I could get an HTTP response after the application was deployed. Then I installed Prometheus Adapter via its Helm chart, creating a new hpa-prome-adapter-values.yaml file to override the default values, as shown below.

rules:
  default: false
  custom:
  - seriesQuery: 'nginx_vts_server_requests_total'
    resources:
      overrides:
        kubernetes_namespace:
          resource: namespace
        kubernetes_pod_name:
          resource: pod
    name:
      matches: "^(.*)_total"
      as: "${1}_per_second"
    metricsQuery: (sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))

prometheus:
  url: http://prometheus-k8s.monitoring.svc
  port: 9090
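For reference, once the <<...>> placeholders in the metricsQuery above are expanded, the adapter would send Prometheus a query roughly like the following (a sketch, using a hypothetical pod name hpa-prom-demo-xxx in the default namespace; the label names come from the overrides section):

```promql
(sum(rate(nginx_vts_server_requests_total{kubernetes_namespace="default",kubernetes_pod_name=~"hpa-prom-demo-xxx"}[1m])) by (kubernetes_pod_name))
```

Note that this only returns per-pod series if the underlying metric actually carries the kubernetes_namespace and kubernetes_pod_name labels.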

I added the rule and specified Prometheus's address, then installed Prometheus Adapter with the following command.

$ helm install prometheus-adapter prometheus-community/prometheus-adapter -n monitoring -f hpa-prome-adapter-values.yaml
NAME: prometheus-adapter
LAST DEPLOYED: Fri Jan 28 09:16:06 2022
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
prometheus-adapter has been deployed.
In a few minutes you should be able to list metrics using the following command(s):

  kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1

Finally, the adapter was installed successfully, and I can get an HTTP response, as shown below.

$ kubectl get po -nmonitoring |grep adapter
prometheus-adapter-665dc5f76c-k2lnl    1/1     Running   0          133m

$ kubectl get --raw="/apis/custom.metrics.k8s.io/v1beta1" | jq
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "namespaces/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": false,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    }
  ]
}


But it should have looked like this:

$  kubectl get --raw="/apis/custom.metrics.k8s.io/v1beta1" | jq
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "namespaces/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": false,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
    {
      "name": "pods/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    }
  ]
}

Why can't I get the metric pods/nginx_vts_server_requests_per_second? The query below also fails.

$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/nginx_vts_server_requests_per_second" | jq .
Error from server (NotFound): the server could not find the metric nginx_vts_server_requests_per_second for pods

Could anyone please help? Thanks a lot.

It is worth knowing that with the kube-prometheus repository you can also install components such as the Prometheus Adapter for Kubernetes Metrics APIs, so there is no need to install it separately with Helm.

I'm going to use your hpa-prome-demo.yaml manifest file to demonstrate how to monitor the nginx_vts_server_requests_total metric.


First, we need to install Prometheus and the Prometheus Adapter, configuring them appropriately with the steps below.

Clone the kube-prometheus repository and refer to the Kubernetes compatibility matrix to choose a compatible branch:

$ git clone https://github.com/prometheus-operator/kube-prometheus.git 
$ cd kube-prometheus
$ git checkout release-0.9

Install the jb, jsonnet, and gojsontoyaml tools:

$ go install -a github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb@latest
$ go install github.com/google/go-jsonnet/cmd/jsonnet@latest
$ go install github.com/brancz/gojsontoyaml@latest 

In the example.jsonnet file, uncomment the (import 'kube-prometheus/addons/custom-metrics.libsonnet') + line:

$ cat example.jsonnet
local kp =
  (import 'kube-prometheus/main.libsonnet') +
  // Uncomment the following imports to enable its patches
  // (import 'kube-prometheus/addons/anti-affinity.libsonnet') +
  // (import 'kube-prometheus/addons/managed-cluster.libsonnet') +
  // (import 'kube-prometheus/addons/node-ports.libsonnet') +
  // (import 'kube-prometheus/addons/static-etcd.libsonnet') +
  (import 'kube-prometheus/addons/custom-metrics.libsonnet') +          <--- This line
  // (import 'kube-prometheus/addons/external-metrics.libsonnet') +
...

Add the following rule to the rules+ section of the ./jsonnet/kube-prometheus/addons/custom-metrics.libsonnet file:

      {
        seriesQuery: "nginx_vts_server_requests_total",
        resources: {
          overrides: {
            namespace: { resource: 'namespace' },
            pod: { resource: 'pod' },
          },
        },
        name: { "matches": "^(.*)_total", "as": "${1}_per_second" },
        metricsQuery: "(sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))",
      },
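The name rule renames the series by regex: anything matching ^(.*)_total is exposed as ${1}_per_second. The substitution can be sanity-checked with a quick sketch (the adapter itself uses Go's RE2 syntax, which behaves the same for this pattern):

```python
import re

# The adapter's rename rule: matches "^(.*)_total", renames to "${1}_per_second"
series = "nginx_vts_server_requests_total"
renamed = re.sub(r"^(.*)_total", r"\1_per_second", series)
print(renamed)  # nginx_vts_server_requests_per_second
```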

After this update, the ./jsonnet/kube-prometheus/addons/custom-metrics.libsonnet file should look like this:
NOTE: this is not the whole file, just the important part of it.

$ cat custom-metrics.libsonnet
// Custom metrics API allows the HPA v2 to scale based on arbitrary metrics.
// For more details on usage visit https://github.com/DirectXMan12/k8s-prometheus-adapter#quick-links

{
  values+:: {
    prometheusAdapter+: {
      namespace: $.values.common.namespace,
      // Rules for custom-metrics
      config+:: {
        rules+: [
          {
            seriesQuery: "nginx_vts_server_requests_total",
            resources: {
              overrides: {
                namespace: { resource: 'namespace' },
                pod: { resource: 'pod' },
              },
            },
            name: { "matches": "^(.*)_total", "as": "${1}_per_second" },
            metricsQuery: "(sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))",
          },
...

Update the kube-prometheus dependencies using the jsonnet-bundler update functionality:

$ jb update

Compile the manifests:

$ ./build.sh example.jsonnet

Now simply use kubectl to install Prometheus and the other components according to your configuration:

$ kubectl apply --server-side -f manifests/setup
$ kubectl apply -f manifests/

After configuring Prometheus, we can deploy the sample hpa-prom-demo Deployment:
NOTE: I've removed the annotations because I'm going to use a ServiceMonitor to describe the set of targets to be monitored by Prometheus.

$ cat hpa-prome-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-prom-demo
spec:
  selector:
    matchLabels:
      app: nginx-server
  template:
    metadata:
      labels:
        app: nginx-server
    spec:
      containers:
      - name: nginx-demo
        image: cnych/nginx-vts:v1.0
        resources:
          limits:
            cpu: 50m
          requests:
            cpu: 50m
        ports:
        - containerPort: 80
          name: http
---
apiVersion: v1
kind: Service
metadata:
  name: hpa-prom-demo
  labels:
    app: nginx-server
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http
  selector:
    app: nginx-server
  type: LoadBalancer

Next, create a ServiceMonitor that describes how to monitor our NGINX (its selector matches the app: nginx-server label set on the Service above):

$ cat servicemonitor.yaml
kind: ServiceMonitor
apiVersion: monitoring.coreos.com/v1
metadata:
  name: hpa-prom-demo
  labels:
    app: nginx-server
spec:
  selector:
    matchLabels:
      app: nginx-server
  endpoints:
  - interval: 15s
    path: "/status/format/prometheus"
    port: http

After waiting some time, let's check the hpa-prom-demo logs to make sure it is being scraped correctly:

$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
hpa-prom-demo-bbb6c65bb-49jsh   1/1     Running   0          35m

$ kubectl logs -f hpa-prom-demo-bbb6c65bb-49jsh
...
10.4.0.9 - - [04/Feb/2022:09:29:17 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3771 "-" "Prometheus/2.29.1" "-"
10.4.0.9 - - [04/Feb/2022:09:29:32 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3771 "-" "Prometheus/2.29.1" "-"
10.4.0.9 - - [04/Feb/2022:09:29:47 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
10.4.0.9 - - [04/Feb/2022:09:30:02 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
10.4.0.9 - - [04/Feb/2022:09:30:17 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
10.4.2.12 - - [04/Feb/2022:09:30:23 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
...

Finally, we can check that our metrics work as expected:

$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/" | jq . | grep -A 7 "nginx_vts_server_requests_per_second"
      "name": "pods/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
--
      "name": "namespaces/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": false,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },

$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/nginx_vts_server_requests_per_second" | jq .
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/nginx_vts_server_requests_per_second"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "hpa-prom-demo-bbb6c65bb-49jsh",
        "apiVersion": "/v1"
      },
      "metricName": "nginx_vts_server_requests_per_second",
      "timestamp": "2022-02-04T09:32:59Z",
      "value": "533m",
      "selector": null
    }
  ]
}
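The "533m" value above uses the Kubernetes quantity notation, where the m suffix means milli-units, so 533m is 0.533 requests per second. Decoding it can be sketched as follows (parse_quantity is a hypothetical helper for illustration, not part of any tool above):

```python
def parse_quantity(q: str) -> float:
    """Decode a small subset of the Kubernetes quantity format (e.g. '533m' -> 0.533)."""
    suffixes = {"m": 1e-3, "k": 1e3, "M": 1e6}
    if q and q[-1] in suffixes:
        return float(q[:-1]) * suffixes[q[-1]]
    return float(q)

print(parse_quantity("533m"))  # ~0.533 requests per second
```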

Environment

  1. All Prometheus charts installed with Helm from prometheus-community https://prometheus-community.github.io/helm-chart
  2. k8s cluster enabled by Docker for Mac

Solution
I hit the same issue. In the Prometheus UI, I found that the metric has a namespace label but no pod label, as shown below.

nginx_vts_server_requests_total{code="1xx", host="*", instance="10.1.0.19:80", job="kubernetes-service-endpoints", namespace="default", node="docker-desktop", service="hpa-prom-demo"}

I suspected that Prometheus might not be exposing pod as a label, so I checked the Prometheus configuration and found:

      - action: replace
        source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: node

Then, following https://prometheus.io/docs/prometheus/latest/configuration/configuration/, I added a similar block under each occurrence of __meta_kubernetes_pod_node_name (i.e. in 2 places):

      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
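For completeness: if the running Prometheus does not pick up the edited ConfigMap by itself, a reload can be triggered manually (a sketch; it assumes the server was started with --web.enable-lifecycle, and the service name and port may differ in your setup):

```shell
# Forward the Prometheus port locally, then ask the server to reload its config
kubectl -n monitoring port-forward svc/prometheus-k8s 9090:9090 &
curl -X POST http://localhost:9090/-/reload
```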

After a while, the ConfigMap was reloaded, and both the UI and the API could find the pod label:

$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq                                    
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "pods/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
    {
      "name": "namespaces/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": false,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    }
  ]
}
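With pods/nginx_vts_server_requests_per_second exposed, an HPA can finally consume the metric. A sketch of what that object might look like (autoscaling/v2 API; the replica bounds and the target of 10 requests/s per pod are arbitrary example values):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-prom-demo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-prom-demo
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metric:
        name: nginx_vts_server_requests_per_second
      target:
        type: AverageValue
        averageValue: "10"
```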
