
Can't access Prometheus from public IP on AWS

I used kops to install a k8s cluster on AWS.

Then I installed Prometheus using Helm:

$ helm install stable/prometheus \
    --set server.persistentVolume.enabled=false \
    --set alertmanager.persistentVolume.enabled=false

Then I followed the chart's notes to do a port-forward:

Get the Prometheus server URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9090

My EC2 instance's public IP on AWS is 12.29.43.14 (not the real one). I tried to access it from a browser:

http://12.29.43.14:9090

I can't access the page. Why?


Another issue: after installing the prometheus chart, the alertmanager pod didn't run:

ungaged-woodpecker-prometheus-alertmanager-6f9f8b98ff-qhhw4       1/2       CrashLoopBackOff   1          9s
ungaged-woodpecker-prometheus-kube-state-metrics-5fd97698cktsj5   1/1       Running            0          9s
ungaged-woodpecker-prometheus-node-exporter-45jtn                 1/1       Running            0          9s
ungaged-woodpecker-prometheus-node-exporter-ztj9w                 1/1       Running            0          9s
ungaged-woodpecker-prometheus-pushgateway-57b67c7575-c868b        0/1       Running            0          9s
ungaged-woodpecker-prometheus-server-7f858db57-w5h2j              1/2       Running            0          9s

Checking the pod details:

$ kubectl describe po ungaged-woodpecker-prometheus-alertmanager-6f9f8b98ff-qhhw4
Name:           ungaged-woodpecker-prometheus-alertmanager-6f9f8b98ff-qhhw4
Namespace:      default
Node:           ip-100.200.0.1.ap-northeast-1.compute.internal/100.200.0.1
Start Time:     Fri, 26 Jan 2018 02:45:10 +0000
Labels:         app=prometheus
                component=alertmanager
                pod-template-hash=2959465499
                release=ungaged-woodpecker
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"ungaged-woodpecker-prometheus-alertmanager-6f9f8b98ff","uid":"ec...
                kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container prometheus-alertmanager; cpu request for container prometheus-alertmanager-configmap-reload
Status:         Running
IP:             100.96.6.91
Created By:     ReplicaSet/ungaged-woodpecker-prometheus-alertmanager-6f9f8b98ff
Controlled By:  ReplicaSet/ungaged-woodpecker-prometheus-alertmanager-6f9f8b98ff
Containers:
  prometheus-alertmanager:
    Container ID:  docker://e9fe9d7bd4f78354f2c072d426fa935d955e0d6748c4ab67ebdb84b51b32d720
    Image:         prom/alertmanager:v0.9.1
    Image ID:      docker-pullable://prom/alertmanager@sha256:ed926b227327eecfa61a9703702c9b16fc7fe95b69e22baa656d93cfbe098320
    Port:          9093/TCP
    Args:
      --config.file=/etc/config/alertmanager.yml
      --storage.path=/data
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 26 Jan 2018 02:45:26 +0000
      Finished:     Fri, 26 Jan 2018 02:45:26 +0000
    Ready:          False
    Restart Count:  2
    Requests:
      cpu:        100m
    Readiness:    http-get http://:9093/%23/status delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /data from storage-volume (rw)
      /etc/config from config-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wppzm (ro)
  prometheus-alertmanager-configmap-reload:
    Container ID:  docker://9320a0f157aeee7c3947027667aa6a2e00728d7156520c19daec7f59c1bf6534
    Image:         jimmidyson/configmap-reload:v0.1
    Image ID:      docker-pullable://jimmidyson/configmap-reload@sha256:2d40c2eaa6f435b2511d0cfc5f6c0a681eeb2eaa455a5d5ac25f88ce5139986e
    Port:          <none>
    Args:
      --volume-dir=/etc/config
      --webhook-url=http://localhost:9093/-/reload
    State:          Running
      Started:      Fri, 26 Jan 2018 02:45:11 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:
      /etc/config from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wppzm (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      ungaged-woodpecker-prometheus-alertmanager
    Optional:  false
  storage-volume:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  default-token-wppzm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-wppzm
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age                From                                                      Message
  ----     ------                 ----               ----                                                      -------
  Normal   Scheduled              34s                default-scheduler                                         Successfully assigned ungaged-woodpecker-prometheus-alertmanager-6f9f8b98ff-qhhw4 to ip-100.200.0.1.ap-northeast-1.compute.internal
  Normal   SuccessfulMountVolume  34s                kubelet, ip-100.200.0.1.ap-northeast-1.compute.internal  MountVolume.SetUp succeeded for volume "storage-volume"
  Normal   SuccessfulMountVolume  34s                kubelet, ip-100.200.0.1.ap-northeast-1.compute.internal  MountVolume.SetUp succeeded for volume "config-volume"
  Normal   SuccessfulMountVolume  34s                kubelet, ip-100.200.0.1.ap-northeast-1.compute.internal  MountVolume.SetUp succeeded for volume "default-token-wppzm"
  Normal   Pulled                 33s                kubelet, ip-100.200.0.1.ap-northeast-1.compute.internal  Container image "jimmidyson/configmap-reload:v0.1" already present on machine
  Normal   Created                33s                kubelet, ip-100.200.0.1.ap-northeast-1.compute.internal  Created container
  Normal   Started                33s                kubelet, ip-100.200.0.1.ap-northeast-1.compute.internal  Started container
  Normal   Pulled                 18s (x3 over 34s)  kubelet, ip-100.200.0.1.ap-northeast-1.compute.internal  Container image "prom/alertmanager:v0.9.1" already present on machine
  Normal   Created                18s (x3 over 34s)  kubelet, ip-100.200.0.1.ap-northeast-1.compute.internal  Created container
  Normal   Started                18s (x3 over 33s)  kubelet, ip-100.200.0.1.ap-northeast-1.compute.internal  Started container
  Warning  BackOff                2s (x4 over 32s)   kubelet, ip-100.200.0.1.ap-northeast-1.compute.internal  Back-off restarting failed container
  Warning  FailedSync             2s (x4 over 32s)   kubelet, ip-100.200.0.1.ap-northeast-1.compute.internal  Error syncing pod

Not sure why it shows FailedSync.

When you do a kubectl port-forward with that command, it makes the port available on your localhost. So run the command and then hit http://localhost:9090.
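For example, a minimal check, reusing the commands from the chart's notes (run them on whatever machine you run kubectl from):

export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9090 &
# The forward is bound to 127.0.0.1 on this machine only, so this works...
curl http://localhost:9090/graph
# ...but http://<public-ip>:9090 from outside will not.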

You won't be able to directly hit the Prometheus ports from the public IP, outside the cluster. In the longer run you may want to expose Prometheus at a nice domain name via ingress (which the chart supports); that's how I'd do it. To use the chart's support for ingress you will need to install an ingress controller in your cluster (like the nginx ingress controller, for example), and then enable ingress by setting --set server.ingress.enabled=true and --set server.ingress.hosts[0]=prometheus.yourdomain.com. Ingress is a fairly large topic in itself, so I'll just refer you to the official docs for that one:

https://kubernetes.io/docs/concepts/services-networking/ingress/

And here's the nginx ingress controller:

https://github.com/kubernetes/ingress-nginx
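For reference, a rough sketch of what that could look like in one command (flag names assume the stable/prometheus chart's server.ingress values, prometheus.yourdomain.com is a placeholder, and an nginx ingress controller is assumed to already be running in the cluster):

helm upgrade ungaged-woodpecker stable/prometheus \
    --set server.persistentVolume.enabled=false \
    --set alertmanager.persistentVolume.enabled=false \
    --set server.ingress.enabled=true \
    --set "server.ingress.hosts[0]=prometheus.yourdomain.com"

You would then point DNS for that hostname at the ingress controller's load balancer.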

As far as the pod that is showing FailedSync, take a look at the logs using kubectl logs ungaged-woodpecker-prometheus-alertmanager-6f9f8b98ff-qhhw4 to see if there's any additional information there.
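Since the pod runs two containers, it may help to name the alertmanager container explicitly and ask for the previous (crashed) run's output, e.g. (container name taken from the describe output above):

kubectl logs ungaged-woodpecker-prometheus-alertmanager-6f9f8b98ff-qhhw4 \
    -c prometheus-alertmanager --previous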
