
Kubernetes Persistent Volume Claim Indefinitely in Pending State

I created a PersistentVolume sourced from a Google Compute Engine persistent disk that I had already formatted and provisioned with data. Kubernetes says the PersistentVolume is available.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: models-1-0-0
  labels:
    name: models-1-0-0
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: models-1-0-0
    fsType: ext4
    readOnly: true

I then created a PersistentVolumeClaim so that I could attach this volume to multiple pods across multiple nodes. However, Kubernetes indefinitely reports it as being in a Pending state.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: models-1-0-0-claim
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 200Gi
  selector:
    matchLabels:
      name: models-1-0-0

Any insights? I feel there may be something wrong with the selector...

Is it even possible to preconfigure a persistent disk with data and have pods across multiple nodes all be able to read from it?

I quickly realized that a PersistentVolumeClaim defaults the storageClassName field to standard when it is not specified. However, when creating a PersistentVolume, storageClassName has no default, so the selector doesn't find a match.

The following worked for me:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: models-1-0-0
  labels:
    name: models-1-0-0
spec:
  capacity:
    storage: 200Gi
  storageClassName: standard
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: models-1-0-0
    fsType: ext4
    readOnly: true
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: models-1-0-0-claim
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 200Gi
  selector:
    matchLabels:
      name: models-1-0-0

With dynamic provisioning, you shouldn't have to create PVs and PVCs separately. In Kubernetes 1.6+, there are default provisioners for GKE and some other cloud environments, which should let you just create a PVC and have it automatically provision a PV and an underlying Persistent Disk for you.

For more on dynamic provisioning, see:

https://kubernetes.io/blog/2017/03/dynamic-provisioning-and-storage-classes-kubernetes/
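
As a rough illustration of dynamic provisioning (the claim name below is made up), on GKE you can create just a claim and let the cluster's default "standard" StorageClass provision the PV and the underlying Persistent Disk for you:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: models-dynamic-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi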

If you're using Microk8s, you have to enable storage before you can start a PersistentVolumeClaim successfully.

Just do:

microk8s.enable storage

You'll need to delete your deployment and start again.

You may also need to manually delete the "pending" PersistentVolumeClaims, because I found that uninstalling the Helm chart that created them didn't clear the PVCs out.

You can do this by first finding a list of names:

kubectl get pvc --all-namespaces

then deleting each name with:

kubectl delete pvc name1 name2 etc...

Once storage is enabled, reapplying your deployment should get things going.
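
Putting those steps together (a sketch; the PVC name, namespace, and manifest file name are placeholders for your own):

microk8s.enable storage                          # enable the storage add-on
kubectl get pvc --all-namespaces                 # find the stuck claims and their namespaces
kubectl delete pvc my-stuck-pvc -n my-namespace  # delete each pending claim
kubectl apply -f my-deployment.yaml              # re-apply the deployment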

I had the same issue, but for a different reason, so I'm sharing it here to help the community.

If you have deleted a PersistentVolumeClaim and then re-create it with the same definition, it will be Pending forever. Why?

persistentVolumeReclaimPolicy is Retain by default in a PersistentVolume. If we have deleted the PersistentVolumeClaim, the PersistentVolume still exists and the volume is considered released. But it is not yet available for another claim because the previous claimant's data remains on the volume, so you need to manually reclaim the volume with the following steps:

  1. Delete the PersistentVolume (the associated underlying storage asset/resource, such as EBS, GCE PD, Azure Disk, etc., will NOT be deleted; it still exists).

  2. (Optional) Manually clean up the data on the associated storage asset accordingly.

  3. (Optional) Manually delete the associated storage asset (EBS, GCE PD, Azure Disk, etc.).

If you still need the same data, you may skip cleaning and deleting the associated storage asset (steps 2 and 3 above); simply re-create a new PersistentVolume with the same storage asset definition and you should be good to create the PersistentVolumeClaim again.
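
A minimal version of that reclaim flow, assuming the PV and PVC definitions from the question are saved as pv.yaml and pvc.yaml and you want to keep the data on the disk:

kubectl delete pv models-1-0-0      # removes only the PV object; the GCE PD and its data remain
kubectl apply -f pv.yaml            # re-create the PV against the same disk
kubectl apply -f pvc.yaml           # the new claim can now bind to the fresh PV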

One last thing to mention: Retain is not the only option for persistentVolumeReclaimPolicy. Below are some other options that you may need to use or try, depending on your use case:

Recycle: performs a basic scrub on the volume (e.g., rm -rf /thevolume/*) and makes it available again for a new claim. Only NFS and HostPath support recycling.

Delete: the associated storage asset, such as an AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume, is deleted.
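
The policy is simply a field on the PV spec. A sketch reusing the disk from the question, with Delete so the PD is removed once the claim is released (use with care; this destroys the data):

kind: PersistentVolume
apiVersion: v1
metadata:
  name: models-1-0-0
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Delete
  gcePersistentDisk:
    pdName: models-1-0-0
    fsType: ext4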

For more information, please check the Kubernetes documentation.

If you still need more clarification or have any questions, please don't hesitate to leave a comment and I will be more than happy to clarify and assist.

I was facing the same problem and realised that k8s actually does just-in-time provisioning, i.e.:

  • When a PVC is created, it stays in the Pending state and no corresponding PV is created.
  • The PVC and PV (EBS volume) are only created after you have created a deployment which uses the PVC.

I am using EKS with Kubernetes version 1.16 and the behaviour is controlled by the StorageClass Volume Binding Mode.
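
With WaitForFirstConsumer, the PVC staying Pending is expected until a Pod that uses it is scheduled. A sketch of such a StorageClass for EKS (the class name is made up; the in-tree EBS provisioner is assumed):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2-wait
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
volumeBindingMode: WaitForFirstConsumer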

I've seen this behaviour in microk8s 1.14.1 when two PersistentVolumes have the same value for spec/hostPath/path, e.g.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-name
  labels:
    type: local
    app: app
spec:
  storageClassName: standard
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/k8s-app-data"

It seems that microk8s is event-based (which isn't necessary on a one-node cluster) and throws away information about failing operations, resulting in unnecessarily poor feedback for almost all failures.

I had this problem with the Helm chart for Apache Airflow (stable); setting storageClass to azurefile helped. What should you do in such cases with cloud providers? Just search for the storage classes that support the needed access mode. ReadWriteMany means that many processes will read and write to the storage SIMULTANEOUSLY. In this case (Azure), that is azurefile.

path: /opt/airflow/logs

  ## configs for the logs PVC
  ##
  persistence:
    ## if a persistent volume is mounted at `logs.path`
    ##
    enabled: true

    ## the name of an existing PVC to use
    ##
    existingClaim: ""

    ## sub-path under `logs.persistence.existingClaim` to use
    ##
    subPath: ""

    ## the name of the StorageClass used by the PVC
    ##
    ## NOTE:
    ## - if set to "", then `PersistentVolumeClaim/spec.storageClassName` is omitted
    ## - if set to "-", then `PersistentVolumeClaim/spec.storageClassName` is set to ""
    ##
    storageClass: "azurefile"

    ## the access mode of the PVC
    ##
    ## WARNING:
    ## - must be: `ReadWriteMany`
    ##
    ## NOTE:
    ## - different StorageClass support different access modes:
    ##   https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
    ##
    accessMode: ReadWriteMany

    ## the size of PVC to request
    ##
    size: 1Gi
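
If you prefer to set this without editing the values file, the same keys can be passed on the command line (a sketch; the release name and chart reference are placeholders for whatever you installed):

helm upgrade --install airflow stable/airflow \
  --set logs.persistence.enabled=true \
  --set logs.persistence.storageClass=azurefile \
  --set logs.persistence.accessMode=ReadWriteMany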

I am using microk8s.

Fixed the problem by running the command below:

systemctl start open-iscsi.service

When you want to manually bind a PVC to a PV with an existing disk, storageClassName should not be specified... but... the cloud provider has set the "standard" StorageClass as the default, so it always gets filled in, whatever you try when patching the PVC/PV.

You can check whether your provider set it as the default by doing kubectl get storageclass (it will be marked "(default)").

To fix this, the best approach is to get your existing StorageClass YAML and add this annotation:

  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
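
Equivalently, the annotation can be patched in place (assuming the default class is named standard):

kubectl patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'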

I had the same problem. My PersistentVolumeClaim YAML was originally as follows:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  volumeName: pv
  resources:
    requests:
      storage: 1Gi

and my PVC was stuck in the Pending status.

After removing volumeName:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

After this change the PVC was no longer stuck in Pending.

I faced the same issue, in which the PersistentVolumeClaim was in the Pending phase indefinitely. I tried setting storageClassName to 'default' in the PersistentVolume, just like I did for the PersistentVolumeClaim, but it did not fix the issue.

I made one change in my persistentvolume.yml: I moved the PersistentVolumeClaim config to the top of the file, with the PersistentVolume as the second config in the YAML file. That fixed the issue.

We need to make sure that the PersistentVolumeClaim is created first and the PersistentVolume is created afterwards to resolve this 'Pending' phase issue.
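
A minimal sketch of that ordering in a single file, reusing the names from the example at the top of this page:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: models-1-0-0-claim
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 200Gi
  selector:
    matchLabels:
      name: models-1-0-0
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: models-1-0-0
  labels:
    name: models-1-0-0
spec:
  storageClassName: standard
  capacity:
    storage: 200Gi
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: models-1-0-0
    fsType: ext4
    readOnly: true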

I am posting this answer after testing it a few times, hoping that it might help someone struggling with the same issue.

Also make sure your VM has enough disk space.
