
OpenTelemetry doesn't show metrics in Prometheus

I've set up OpenTelemetry in Kubernetes. Below is my config.

exporters:
  logging: {}
extensions:
  health_check: {}
  memory_ballast: {}
processors:
  batch: {}
  memory_limiter:
    check_interval: 5s
    limit_mib: 819
    spike_limit_mib: 256
receivers:
  jaeger:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14250
      thrift_compact:
        endpoint: 0.0.0.0:6831
      thrift_http:
        endpoint: 0.0.0.0:14268
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  prometheus:
    config:
      scrape_configs:
      - job_name: opentelemetry-collector
        scrape_interval: 10s
        static_configs:
        - targets:
          - ${MY_POD_IP}:8888
  zipkin:
    endpoint: 0.0.0.0:9411
service:
  extensions:
  - health_check
  - memory_ballast
  pipelines:
    logs:
      exporters:
      - logging
      processors:
      - memory_limiter
      - batch
      receivers:
      - otlp
    metrics:
      exporters:
      - logging
      processors:
      - memory_limiter
      - batch
      receivers:
      - otlp
      - prometheus
    traces:
      exporters:
      - logging
      processors:
      - memory_limiter
      - batch
      receivers:
      - otlp
      - jaeger
      - zipkin
  telemetry:
    metrics:
      address: 0.0.0.0:8888

The endpoint shows as up in Prometheus, but it doesn't show any data. When I check the OTel Collector logs, I see the following:

[screenshot of OTel Collector logs]

I have manually added the scrape config in Prometheus.

scrape_configs:
  - job_name: 'otel-collector'
    scrape_interval: 10s
    static_configs:
      - targets: ['opentelemetry-collector.opentelemetry:8888']
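
For this target to resolve, the collector's Kubernetes Service has to be named opentelemetry-collector in the opentelemetry namespace and expose the telemetry port 8888. A minimal sketch of such a Service (the selector label is an assumption based on common Helm chart defaults; match it to your actual pod labels):

apiVersion: v1
kind: Service
metadata:
  name: opentelemetry-collector
  namespace: opentelemetry
spec:
  selector:
    app.kubernetes.io/name: opentelemetry-collector  # assumed label, adjust to your deployment
  ports:
  - name: metrics
    port: 8888
    targetPort: 8888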

In the OTel Collector ConfigMap I can also see the Prometheus scrape config:

  prometheus:
    config:
      scrape_configs:
      - job_name: opentelemetry-collector
        scrape_interval: 10s
        static_configs:
        - targets:
          - ${MY_POD_IP}:8888

--Update--

kubectl get all -n thanos
NAME                                        READY   STATUS    RESTARTS   AGE
pod/thanos-query-776688f499-pvm24           1/1     Running   0          14h
pod/thanos-query-frontend-5b55d44cc-b6qx5   1/1     Running   0          14h

NAME                            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)              AGE
service/thanos-query            ClusterIP   10.0.112.105   <none>        9090/TCP,10901/TCP   14h
service/thanos-query-frontend   ClusterIP   10.0.223.246   <none>        9090/TCP             14h

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/thanos-query            1/1     1            1           14h
deployment.apps/thanos-query-frontend   1/1     1            1           14h

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/thanos-query-776688f499           1         1         1       14h
replicaset.apps/thanos-query-frontend-5b55d44cc   1         1         1       14h

--Logs--

2022-06-07T07:20:49.852Z        error   exporterhelper/queued_retry.go:183      Exporting failed. The error is not retryable. Dropping data.    {"kind": "exporter", "name": "prometheusremotewrite", "error": "Permanent error: Permanent error: remote write returned HTTP status 404 Not Found; err = <nil>: 404 page not found\n", "dropped_items": 18}
go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send
        go.opentelemetry.io/collector@v0.51.0/exporter/exporterhelper/queued_retry.go:183
go.opentelemetry.io/collector/exporter/exporterhelper.(*metricsSenderWithObservability).send
        go.opentelemetry.io/collector@v0.51.0/exporter/exporterhelper/metrics.go:132
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).start.func1
        go.opentelemetry.io/collector@v0.51.0/exporter/exporterhelper/queued_retry_inmemory.go:118
go.opentelemetry.io/collector/exporter/exporterhelper/internal.consumerFunc.consume
        go.opentelemetry.io/collector@v0.51.0/exporter/exporterhelper/internal/bounded_memory_queue.go:82
go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers.func2
        go.opentelemetry.io/collector@v0.51.0/exporter/exporterhelper/internal/bounded_memory_queue.go:69

I configured the below remote write URL in the OpenTelemetry Collector:

exporters:
  prometheusremotewrite:
    endpoint: "http://thanos-query-frontend.thanos:9090/api/v1/write"

You have configured only the logging exporter, which exports data to the console via zap.Logger; it doesn't write any data to Prometheus.
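
If you just want to confirm in the console that metrics are flowing through the pipeline at all, you can raise the logging exporter's verbosity. A sketch for collector v0.51.0, where the option is called loglevel (newer releases renamed it to verbosity):

exporters:
  logging:
    loglevel: debug  # print full metric details to stdout; newer collector versions use `verbosity: detailed`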

Also configure the prometheusremotewrite exporter and add it to the metrics pipeline. Minimal example:

receivers:
  prometheus:
    config:
      scrape_configs:
      - job_name: opentelemetry-collector
        scrape_interval: 10s
        static_configs:
        - targets: ['localhost:8888']

exporters:
  prometheusremotewrite:
    endpoint: <example: my-prometheus/api/v1/write>

service:
  pipelines:
    metrics:
      receivers:
        - prometheus
      exporters:
        - prometheusremotewrite
  telemetry:
    metrics:
      address: 0.0.0.0:8888
      level: basic
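
Applied to the config from the question, the metrics part could look roughly like this. This is only a sketch: it keeps the existing receivers, processors and logging exporter, and the remote write endpoint is a placeholder that must point to a backend which actually accepts Prometheus remote writes (the 404 in the logs above suggests the thanos-query-frontend endpoint used in the question does not):

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  prometheus:
    config:
      scrape_configs:
      - job_name: opentelemetry-collector
        scrape_interval: 10s
        static_configs:
        - targets:
          - ${MY_POD_IP}:8888

processors:
  batch: {}
  memory_limiter:
    check_interval: 5s
    limit_mib: 819
    spike_limit_mib: 256

exporters:
  logging: {}
  prometheusremotewrite:
    endpoint: "http://<remote-write-capable-backend>/api/v1/write"  # placeholder

service:
  # extensions and the traces/logs pipelines stay as in the original config
  pipelines:
    metrics:
      receivers:
      - otlp
      - prometheus
      processors:
      - memory_limiter
      - batch
      exporters:
      - logging
      - prometheusremotewrite
  telemetry:
    metrics:
      address: 0.0.0.0:8888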

See the documentation for the prometheusremotewrite exporter: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/prometheusremotewriteexporter

See https://grafana.com/grafana/dashboards/15983 if you want a Grafana dashboard for the OpenTelemetry Collector's own telemetry metrics.
