
Kubernetes, Elasticsearch: 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims

I am trying to install Elasticsearch on Kubernetes using bitnami/elasticsearch. I use the following commands:

helm repo add bitnami https://charts.bitnami.com/bitnami
kubectl apply -f ./es-pv.yaml
helm install elasticsearch --set name=elasticsearch,master.replicas=3,data.persistence.size=6Gi,data.replicas=2,coordinating.replicas=1 bitnami/elasticsearch -n elasticsearch
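
As a side note, the install assumes the elasticsearch namespace already exists and that the release picked up the intended settings. A minimal sketch of those two checks (assuming the namespace and release names used above; the namespace creation is only an assumption about the environment):

kubectl create namespace elasticsearch                 # only needed if the namespace was not pre-created
helm get values elasticsearch -n elasticsearch         # shows the values the release was actually installed with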

This is what I get when I check the pods:

# kubectl get pods -n elasticsearch
NAME                                READY   STATUS     RESTARTS   AGE
elasticsearch-coordinating-only-0   0/1     Init:0/1   0          18m
elasticsearch-data-0                0/1     Running    6          18m
elasticsearch-data-1                0/1     Init:0/1   0          18m
elasticsearch-master-0              0/1     Init:0/1   0          18m
elasticsearch-master-1              0/1     Running    6          18m
elasticsearch-master-2              0/1     Init:0/1   0          18m

When I run kubectl describe pod on the elasticsearch-data and elasticsearch-master pods, they all show the same message:

  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
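
If you want more than the latest scheduling message, the event history shows when the FailedScheduling messages started and whether they are still recurring. A minimal sketch (pod and namespace names taken from the listings above):

kubectl get events -n elasticsearch --sort-by=.lastTimestamp      # all recent events in the namespace, oldest first
kubectl describe pod elasticsearch-master-0 -n elasticsearch      # Events section for a single pod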

es-pv.yaml describing the PersistentVolumes:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-master-pv
  labels:
    type: local
spec:
  storageClassName: ''
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: elasticsearch
    name: data-elasticsearch-master-0
  hostPath:
    path: "/usr/share/elasticsearch"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - quiet-violet-vs.icdc.io
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-master-pv-1
  labels:
    type: local
spec:
  storageClassName: ''
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: elasticsearch
    name: data-elasticsearch-master-1
  hostPath:
    path: "/usr/share/elasticsearch"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - shy-fog-vs.icdc.io
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-master-pv-2
  labels:
    type: local
spec:
  storageClassName: ''
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: elasticsearch
    name: data-elasticsearch-master-2
  hostPath:
    path: "/usr/share/elasticsearch"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - quiet-violet-vs.icdc.io
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-data-pv
  labels:
    type: local
spec:
  storageClassName: ''
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: elasticsearch
    name: data-elasticsearch-data-0
  hostPath:
    path: "/usr/share/elasticsearch"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - shy-fog-vs.icdc.io
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-data-pv-1
  labels:
    type: local
spec:
  storageClassName: ''
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: elasticsearch
    name: data-elasticsearch-data-1
  hostPath:
    path: "/usr/share/elasticsearch"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - quiet-violet-vs.icdc.io
root@shy-fog-vs:~/elasticsearch# cat es-values.yaml
resources:
  requests:
    cpu: "200m"
    memory: "512M"
  limits:
    cpu: "1000m"
    memory: "512M"

volumeClaimTemplate:
  storageClassName: local-storage
  accessModes:
  - "ReadWriteOnce"
  resources:
    requests:
      storage: 10Gi
minimumMasterNodes: 1
clusterHealthCheckParams: "wait_for_status=yellow&timeout=2s"
readinessProbe:
   failureThreshold: 3
   initialDelaySeconds: 200
   periodSeconds: 10
   successThreshold: 3
   timeoutSeconds: 5

The PersistentVolumes and PersistentVolumeClaims seem to be alright:

# kubectl get pv
NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                       STORAGECLASS   REASON   AGE
airflow-dags-pv       2Gi        RWX            Retain           Bound    airflow/airflow-dags-pvc                    manual                  112d
airflow-logs-pv       2Gi        RWX            Retain           Bound    airflow/airflow-logs-pvc                    manual                  112d
airflow-pv            2Gi        RWX            Retain           Bound    airflow/airflow-pvc                         manual                  112d
elastic-data-pv       10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-data-0                             15m
elastic-data-pv-1     10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-data-1                             15m
elastic-master-pv     10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-master-0                           15m
elastic-master-pv-1   10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-master-1                           15m
elastic-master-pv-2   10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-master-2                           15m
# kubectl get pvc -n elasticsearch
NAME                          STATUS   VOLUME                CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-elasticsearch-data-0     Bound    elastic-data-pv       10Gi       RWO                           16m
data-elasticsearch-data-1     Bound    elastic-data-pv-1     10Gi       RWO                           16m
data-elasticsearch-master-0   Bound    elastic-master-pv     10Gi       RWO                           16m
data-elasticsearch-master-1   Bound    elastic-master-pv-1   10Gi       RWO                           16m
data-elasticsearch-master-2   Bound    elastic-master-pv-2   10Gi       RWO                           16m

Nodes:

# kubectl get nodes
NAME                        STATUS     ROLES                  AGE    VERSION
dark-butterfly-vs.icdc.io   NotReady   <none>                 253d   v1.20.4
quiet-violet-vs.icdc.io     Ready      <none>                 253d   v1.20.4
shy-fog-vs.icdc.io          Ready      control-plane,master   253d   v1.21.1

Short answer: everything is fine.

Longer answer (and why you got that error):

This is what I get when I check the pods:

 # kubectl get pods -n elasticsearch
 NAME                                READY   STATUS     RESTARTS   AGE
 elasticsearch-coordinating-only-0   0/1     Init:0/1   0          18m
 elasticsearch-data-0                0/1     Running    6          18m
 elasticsearch-data-1                0/1     Init:0/1   0          18m
 elasticsearch-master-0              0/1     Init:0/1   0          18m
 elasticsearch-master-1              0/1     Running    6          18m
 elasticsearch-master-2              0/1     Init:0/1   0          18m

This actually indicates that the volumes have mounted and the pods have started (see that the second master pod is running and the other two are in the "Init" stage).

When I run kubectl describe pod on the elasticsearch-data and elasticsearch-master pods, they all show the same message:

0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.

This is actually expected the first time you start the chart. Kubernetes has detected that you don't have the volumes, and goes off to provision them for you. During that time, the pods can't start because those disks haven't been provisioned yet (and therefore the PersistentVolumeClaims have not been bound, hence the error).
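
While that is happening, you can watch the claims bind directly rather than re-running describe on the pods. A minimal sketch (claim and namespace names taken from the question):

kubectl get pvc -n elasticsearch -w                                 # watch the claims go from Pending to Bound
kubectl describe pvc data-elasticsearch-master-0 -n elasticsearch   # binding events for one claim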

You should also be able to see, from the events section of the kubectl describe output, how recently that message appeared and how frequently it has appeared. It should read something like this:

Events:
  Type     Reason   Age                    From     Message
  ----     ------   ----                   ----     -------
  Normal   Pulling  51m (x112 over 10h)    kubelet  Pulling image "broken-image:latest"

So here, the "broken-image" image has been pulled 112 times over the past 10 hours, and that message is 51 minutes old.

Once the disks have been provisioned and the PersistentVolumeClaims have been bound (the disks have been allocated to your claims), your pods can start. You can also confirm this from the other snippet you referenced:

# kubectl get pv
NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                       STORAGECLASS   REASON   AGE
airflow-dags-pv       2Gi        RWX            Retain           Bound    airflow/airflow-dags-pvc                    manual                  112d
airflow-logs-pv       2Gi        RWX            Retain           Bound    airflow/airflow-logs-pvc                    manual                  112d
airflow-pv            2Gi        RWX            Retain           Bound    airflow/airflow-pvc                         manual                  112d
elastic-data-pv       10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-data-0                             15m
elastic-data-pv-1     10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-data-1                             15m
elastic-master-pv     10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-master-0                           15m
elastic-master-pv-1   10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-master-1                           15m
elastic-master-pv-2   10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-master-2                           15m

You can see from this that each PV (PersistentVolume) has been bound to its claim, and that is why your pods have started.
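
Once the claims are Bound, the remaining pods should become Ready on their own. A minimal sketch of waiting for that; the label selector is an assumption (check kubectl get pods --show-labels for the labels the chart actually applies):

kubectl wait --for=condition=Ready pod -l app.kubernetes.io/name=elasticsearch -n elasticsearch --timeout=15m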
