
HPA is unable to fetch metrics from custom metrics API

I have set up a Prometheus adapter to be able to use Prometheus metrics as custom metrics in Kubernetes, but I am having an issue auto-scaling a deployment with an Object-type HPA.

Here is everything I get from the custom-metrics API.

```
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/ | jq .
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "services/kong_http_status_per_second",
      "singularName": "",
      "namespaced": false,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    }
  ]
}
```

```
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/services/*/kong_http_status_per_second | jq .
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/services/%2A/kong_http_status_per_second"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Service",
        "name": "httpbin",
        "apiVersion": "/v1"
      },
      "metricName": "kong_http_status_per_second",
      "timestamp": "2019-01-08T10:30:25Z",
      "value": "339m"
    },
    {
      "describedObject": {
        "kind": "Service",
        "name": "httpbin",
        "apiVersion": "/v1"
      },
      "metricName": "kong_http_status_per_second",
      "timestamp": "2019-01-08T10:30:25Z",
      "value": "339m"
    }
  ]
}
```
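
One detail worth noting in the listing above: the resource is reported with `"namespaced": false`, while (as far as I can tell) the HPA controller looks up a Service metric through the namespaced path. A quick way to check whether the namespaced endpoint answers (using the `lgrondin` namespace from the `kubectl describe hpa` output) is:

```
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/namespaces/lgrondin/services/httpbin/kong_http_status_per_second | jq .
```

If this returns a 404 while the wildcard query above succeeds, the adapter is registering the metric as cluster-scoped, which would explain the `unable to fetch metrics from custom metrics API` error.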

And here is my HPA:

```
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: httpbin
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: httpbin
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Object
    object: 
      metricName: kong_http_status_per_second
      target: 
        apiVersion: v1
        kind: Service
        name: httpbin
      targetValue: 1
```
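
For reference, on clusters that support it, the same Object metric can also be written with the `autoscaling/v2beta2` API, where the fields moved under `describedObject`, `metric`, and `target` (a sketch of the equivalent spec, not tested against this cluster):

```
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: httpbin
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: httpbin
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      describedObject:
        apiVersion: v1
        kind: Service
        name: httpbin
      metric:
        name: kong_http_status_per_second
      target:
        type: Value
        value: "1"
```
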
```
kubectl describe hpa
Name:                                                httpbin
Namespace:                                           lgrondin
Labels:                                              <none>
Annotations:                                         kubectl.kubernetes.io/last-applied-configuration:
                                                       {"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"httpbin","namespace":"lgrondin"}...
CreationTimestamp:                                   Tue, 08 Jan 2019 10:22:24 +0100
Reference:                                           Deployment/httpbin
Metrics:                                             ( current / target )
  "kong_http_status_per_second" on Service/httpbin:  <unknown> / 1
Min replicas:                                        2
Max replicas:                                        10
Deployment pods:                                     2 current / 2 desired
Conditions:
  Type           Status  Reason                 Message
  ----           ------  ------                 -------
  AbleToScale    True    SucceededGetScale      the HPA controller was able to get the target's current scale
  ScalingActive  False   FailedGetObjectMetric  the HPA was unable to compute the replica count: unable to get metric kong_http_status_per_second: Service on lgrondin httpbin/unable to fetch metrics from custom metrics API: the server could not find the metric kong_http_status_per_second for services
Events:
  Type     Reason                 Age                    From                       Message
  ----     ------                 ----                   ----                       -------
  Warning  FailedGetObjectMetric  87s (x449 over 4h31m)  horizontal-pod-autoscaler  unable to get metric kong_http_status_per_second: Service on lgrondin httpbin/unable to fetch metrics from custom metrics API: the server could not find the metric kong_http_status_per_second for services
```

It seems I can get the metric by calling the API directly, but the HPA cannot retrieve it.

Thanks for any help.
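
One thing that can produce exactly this symptom is the adapter rule that registers the metric. With the prometheus-adapter, a Service metric is only exposed as namespaced when the rule maps a `namespace` label onto the namespace resource; a rule shaped roughly like the following does that (the series name, labels, and rate window here are assumptions, not taken from the cluster above):

```
rules:
- seriesQuery: 'kong_http_status{namespace!="",service!=""}'
  resources:
    overrides:
      namespace: {resource: "namespace"}
      service: {resource: "service"}
  name:
    as: "kong_http_status_per_second"
  metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'
```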

When I had a similar problem, my issue was that the deployment I was attempting to scale was not the one that produced the metric I was relying on.
In my case I wanted to scale a pod, and here is what worked for me:

  1. I made sure that the pod I was attempting to scale was part of a Deployment.
  2. I made sure that the pod exposed the desired metrics on a specific port, and that Prometheus scraped them using annotations.
  3. I made sure that the Deployment was in the same namespace as the HPA.
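
For step 2, the annotation-based scrape setup looks roughly like this fragment of the Deployment's pod template (the port and path are assumptions and must match what the container actually serves, and Prometheus must be configured to honor these annotations):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9102"    # assumed metrics port
        prometheus.io/path: "/metrics"
```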
