
k8s - Cinder "0/x nodes are available: x node(s) had volume node affinity conflict"

I have my own k8s cluster. I am trying to link the cluster to openstack/cinder.

When I create a PVC, I can see the PV in k8s and the volume in OpenStack. But when I link a pod to the PVC, I get the message "0/x nodes are available: x node(s) had volume node affinity conflict".

My test yml:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: classic
provisioner: kubernetes.io/cinder
parameters:
  type: classic

---


kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-infra-consuldata4
  namespace: infra
spec:
  storageClassName: classic
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: consul
  namespace: infra
  labels:
    app: consul
spec:
  replicas: 1
  selector:
    matchLabels:
      app: consul
  template:
    metadata:
      labels:
        app: consul
    spec:
      containers:
      - name: consul
        image: consul:1.4.3
        volumeMounts:
        - name: data
          mountPath: /consul
        resources:
          requests:
            cpu: 100m
          limits:
            cpu: 500m
        command: ["consul", "agent", "-server", "-bootstrap", "-ui", "-bind", "0.0.0.0", "-client", "0.0.0.0", "-data-dir", "/consul"]
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pvc-infra-consuldata4

Result:

kpro describe pvc -n infra
Name:          pvc-infra-consuldata4
Namespace:     infra
StorageClass:  classic
Status:        Bound
Volume:        pvc-76bfdaf1-40bb-11e9-98de-fa163e53311c
Labels:        
Annotations:   kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"pvc-infra-consuldata4","namespace":"infra"},"spec":...
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/cinder
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Events:
  Type       Reason                 Age   From                         Message
  ----       ------                 ----  ----                         -------
  Normal     ProvisioningSucceeded  61s   persistentvolume-controller  Successfully provisioned volume pvc-76bfdaf1-40bb-11e9-98de-fa163e53311c using kubernetes.io/cinder
Mounted By:  consul-85684dd7fc-j84v7
kpro describe po -n infra consul-85684dd7fc-j84v7
Name:               consul-85684dd7fc-j84v7
Namespace:          infra
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app=consul
                    pod-template-hash=85684dd7fc
Annotations:        <none>
Status:             Pending
IP:                 
Controlled By:      ReplicaSet/consul-85684dd7fc
Containers:
  consul:
    Image:      consul:1.4.3
    Port:       <none>
    Host Port:  <none>
    Command:
      consul
      agent
      -server
      -bootstrap
      -ui
      -bind
      0.0.0.0
      -client
      0.0.0.0
      -data-dir
      /consul
    Limits:
      cpu:  2
    Requests:
      cpu:        500m
    Environment:  <none>
    Mounts:
      /consul from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-nxchv (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-infra-consuldata4
    ReadOnly:   false
  default-token-nxchv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-nxchv
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  36s (x6 over 2m40s)  default-scheduler  0/6 nodes are available: 6 node(s) had volume node affinity conflict. 

Why does K8s successfully create the Cinder volume but fail to schedule the pod?

Try to find out the nodeAffinity of your persistent volume:

$ kubectl describe pv pvc-76bfdaf1-40bb-11e9-98de-fa163e53311c
Node Affinity:     
  Required Terms:  
    Term 0:        kubernetes.io/hostname in [xxx]

Then check whether xxx matches the label yyy of the node your pod is supposed to run on:

$ kubectl get nodes
NAME      STATUS   ROLES               AGE   VERSION
yyy       Ready    worker              8d    v1.15.3

If they do not match, you get the "x node(s) had volume node affinity conflict" error, and you need to re-create the persistent volume with the correct nodeAffinity configuration.
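A quick way to make that comparison (a sketch; yyy stands for the node name from the example above):

# Print the node's labels and look for the kubernetes.io/hostname value,
# then compare it with the value xxx from the PV's node affinity term
$ kubectl describe node yyy | grep kubernetes.io/hostname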

I also ran into this problem when I had forgotten to deploy the EBS CSI driver before trying to get my pod to attach to a volume:

kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"
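To confirm the driver actually came up before retrying the pod (a hedged check; the EBS CSI driver pods normally run in kube-system):

$ kubectl get pods -n kube-system | grep ebs-csi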

You have set provisioner: kubernetes.io/cinder, which, according to the Kubernetes Storage Classes documentation - OpenStack Cinder:

Note:

FEATURE STATE: deprecated since Kubernetes 1.11

This internal provisioner of OpenStack is deprecated. Please use the external cloud provider for OpenStack.

Based on the OpenStack GitHub, you should set provisioner: openstack.org/standalone-cinder

Please take a look at persistent-volume-provisioning cinder for detailed usage and yaml files.
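For reference, a minimal sketch of what the StorageClass from the question could look like with the standalone provisioner; the name and the type: classic parameter are simply carried over from the original manifest and may need adjusting for your OpenStack setup:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: classic
provisioner: openstack.org/standalone-cinder
parameters:
  type: classic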

You may also be interested in reading these StackOverflow questions:

Kubernetes Cinder volumes do not mount with cloud-provider=openstack

How to create a storage class and dynamically provision persistent volumes in a Kubernetes cluster with OpenStack Cinder
