
ICP 3.1.1 Error: Grafana and Prometheus Kubernetes Pods Always in 'Init' Status

I have completed installing ICP with VA, using 1 Master, 1 Proxy, 1 Management, 1 VA, and 3 Workers, with GlusterFS inside.


This is the list of Kubernetes pods that are not running:

(screenshot: pod check using 'kubectl get pods')
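Since the screenshot is not reproduced here, a minimal sketch of how non-Running pods can be filtered out of `kubectl get pods`-style output; the sample lines below are hypothetical and only mimic the failing pods from the events further down. On a real cluster you would pipe `kubectl get pods -n kube-system --no-headers` into the same awk filter.

```shell
# Filter `kubectl get pods` style output down to pods whose STATUS
# column (field 3) is not "Running". Sample lines are hypothetical,
# shaped like the failing pods described later in this question.
sample='custom-metrics-adapter-5d5b694df7-cggz8              1/1   Running    0   17m
monitoring-grafana-799d7fcf97-sj64j                  0/2   Init:0/1   0   18m
monitoring-prometheus-85546d8575-jr89h               0/4   Init:0/2   0   19m
monitoring-prometheus-alertmanager-65445b66bd-6bfpn  0/3   Init:0/1   0   20m'

not_running=$(printf '%s\n' "$sample" | awk '$3 != "Running"')
printf '%s\n' "$not_running"
```

The same filter works on live output: `kubectl get pods -n kube-system --no-headers | awk '$3 != "Running"'`.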

Storage - PersistentVolume GlusterFS on ICP


These are the 'kubectl describe' error events for the failing pods:


custom-metrics-adapter

Events:
      Type    Reason     Age   From                     Message
      ----    ------     ----  ----                     -------
      Normal  Scheduled  17m   default-scheduler        Successfully assigned kube-system/custom-metrics-adapter-5d5b694df7-cggz8 to 192.168.10.126
      Normal  Pulled     17m   kubelet, 192.168.10.126  Container image "swgcluster.icp:8500/ibmcom/curl:4.0.0" already present on machine
      Normal  Created    17m   kubelet, 192.168.10.126  Created container
      Normal  Started    17m   kubelet, 192.168.10.126  Started container

monitoring-grafana

Events:
      Type     Reason       Age   From                     Message
      ----     ------       ----  ----                     -------
      Normal   Scheduled    18m   default-scheduler        Successfully assigned kube-system/monitoring-grafana-799d7fcf97-sj64j to 192.168.10.126
      Warning  FailedMount  1m (x8 over 16m)  kubelet, 192.168.10.126  (combined from similar events): MountVolume.SetUp failed for volume "pvc-251f69e3-fd60-11e8-9779-000c2914ff99" : mount failed: mount failed: exit status 32
    Mounting command: systemd-run
    Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/e2c85434-fd67-11e8-822b-000c2914ff99/volumes/kubernetes.io~glusterfs/pvc-251f69e3-fd60-11e8-9779-000c2914ff99 --scope -- mount -t glusterfs -o log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-251f69e3-fd60-11e8-9779-000c2914ff99/monitoring-grafana-799d7fcf97-sj64j-glusterfs.log,backup-volfile-servers=192.168.10.115:192.168.10.116:192.168.10.119,auto_unmount,log-level=ERROR 192.168.10.115:vol_946f98c8a92ce2930acd3181d803943c /var/lib/kubelet/pods/e2c85434-fd67-11e8-822b-000c2914ff99/volumes/kubernetes.io~glusterfs/pvc-251f69e3-fd60-11e8-9779-000c2914ff99
    Output: Running scope as unit run-r6ba2425d0e7f437d922dbe0830cd5a97.scope.
    mount: unknown filesystem type 'glusterfs'

     the following error information was pulled from the glusterfs log to help diagnose this issue: could not open log file for pod monitoring-grafana-799d7fcf97-sj64j
      Warning  FailedMount  50s (x8 over 16m)  kubelet, 192.168.10.126  Unable to mount volumes for pod "monitoring-grafana-799d7fcf97-sj64j_kube-system(e2c85434-fd67-11e8-822b-000c2914ff99)": timeout expired waiting for volumes to attach or mount for pod "kube-system"/"monitoring-grafana-799d7fcf97-sj64j". list of unmounted volumes=[grafana-storage]. list of unattached volumes=[grafana-storage config-volume dashboard-volume dashboard-config ds-job-config router-config monitoring-ca-certs monitoring-certs router-entry default-token-f6d9q]

monitoring-prometheus

Events:
  Type     Reason       Age   From                     Message
  ----     ------       ----  ----                     -------
  Normal   Scheduled    19m   default-scheduler        Successfully assigned kube-system/monitoring-prometheus-85546d8575-jr89h to 192.168.10.126
  Warning  FailedMount  4m (x6 over 17m)    kubelet, 192.168.10.126  Unable to mount volumes for pod "monitoring-prometheus-85546d8575-jr89h_kube-system(e2ca91a8-fd67-11e8-822b-000c2914ff99)": timeout expired waiting for volumes to attach or mount for pod "kube-system"/"monitoring-prometheus-85546d8575-jr89h". list of unmounted volumes=[storage-volume]. list of unattached volumes=[config-volume rules-volume etcd-certs storage-volume router-config monitoring-ca-certs monitoring-certs monitoring-client-certs router-entry lua-scripts-config-config default-token-f6d9q]
  Warning  FailedMount  55s (x11 over 17m)  kubelet, 192.168.10.126  (combined from similar events): MountVolume.SetUp failed for volume "pvc-252001ed-fd60-11e8-9779-000c2914ff99" : mount failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/e2ca91a8-fd67-11e8-822b-000c2914ff99/volumes/kubernetes.io~glusterfs/pvc-252001ed-fd60-11e8-9779-000c2914ff99 --scope -- mount -t glusterfs -o auto_unmount,log-level=ERROR,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-252001ed-fd60-11e8-9779-000c2914ff99/monitoring-prometheus-85546d8575-jr89h-glusterfs.log,backup-volfile-servers=192.168.10.115:192.168.10.116:192.168.10.119 192.168.10.115:vol_f101b55d8b1dc3021ec7689713a74e8c /var/lib/kubelet/pods/e2ca91a8-fd67-11e8-822b-000c2914ff99/volumes/kubernetes.io~glusterfs/pvc-252001ed-fd60-11e8-9779-000c2914ff99
Output: Running scope as unit run-r638272b55bca4869b271e8e4b1ef45cf.scope.
mount: unknown filesystem type 'glusterfs'

 the following error information was pulled from the glusterfs log to help diagnose this issue: could not open log file for pod monitoring-prometheus-85546d8575-jr89h

monitoring-prometheus-alertmanager

Events:
  Type     Reason       Age   From                     Message
  ----     ------       ----  ----                     -------
  Normal   Scheduled    20m   default-scheduler        Successfully assigned kube-system/monitoring-prometheus-alertmanager-65445b66bd-6bfpn to 192.168.10.126
  Warning  FailedMount  1m (x9 over 18m)  kubelet, 192.168.10.126  (combined from similar events): MountVolume.SetUp failed for volume "pvc-251ed00f-fd60-11e8-9779-000c2914ff99" : mount failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/e2cbe5e7-fd67-11e8-822b-000c2914ff99/volumes/kubernetes.io~glusterfs/pvc-251ed00f-fd60-11e8-9779-000c2914ff99 --scope -- mount -t glusterfs -o backup-volfile-servers=192.168.10.115:192.168.10.116:192.168.10.119,auto_unmount,log-level=ERROR,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-251ed00f-fd60-11e8-9779-000c2914ff99/monitoring-prometheus-alertmanager-65445b66bd-6bfpn-glusterfs.log 192.168.10.115:vol_7766e36a77cbd2c0afe3bd18626bd2c4 /var/lib/kubelet/pods/e2cbe5e7-fd67-11e8-822b-000c2914ff99/volumes/kubernetes.io~glusterfs/pvc-251ed00f-fd60-11e8-9779-000c2914ff99
Output: Running scope as unit run-r35994e15064e48e2a36f69a88009aa5d.scope.
mount: unknown filesystem type 'glusterfs'

 the following error information was pulled from the glusterfs log to help diagnose this issue: could not open log file for pod monitoring-prometheus-alertmanager-65445b66bd-6bfpn
  Warning  FailedMount  23s (x9 over 18m)  kubelet, 192.168.10.126  Unable to mount volumes for pod "monitoring-prometheus-alertmanager-65445b66bd-6bfpn_kube-system(e2cbe5e7-fd67-11e8-822b-000c2914ff99)": timeout expired waiting for volumes to attach or mount for pod "kube-system"/"monitoring-prometheus-alertmanager-65445b66bd-6bfpn". list of unmounted volumes=[storage-volume]. list of unattached volumes=[config-volume storage-volume router-config monitoring-ca-certs monitoring-certs router-entry default-token-f6d9q]

I just resolved this issue after reinstalling ICP (IBM Cloud Private).

I checked a few possible causes and found that on a few nodes the GlusterFS client had not been completely installed.

I installed the GlusterFS client on ALL nodes (using Ubuntu as the OS):

sudo apt-get install glusterfs-client -y
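To confirm the client is actually present on each node, a small check like the sketch below can help (the `check_cmd` helper is hypothetical, not part of any tool). The point is that kubelet's `mount -t glusterfs` needs the `mount.glusterfs` helper shipped by the glusterfs-client package; when that helper is missing, the mount fails with exactly the "unknown filesystem type 'glusterfs'" error shown in the events above.

```shell
# Hypothetical per-node check: print "present" or "missing" for a
# given command name. Run it for mount.glusterfs on every node;
# any node reporting "missing" still needs glusterfs-client installed.
check_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "present"
  else
    echo "missing"
  fi
}

check_cmd mount.glusterfs
```

A node that reports "missing" is the one producing the FailedMount events, even if the other nodes mount the GlusterFS volumes fine.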
