
Does the storage class dynamically provision a persistent volume per pod?

Kubernetes newbie here, so my question might not make sense. Please bear with me.

So my question is: suppose I have set up a Storage Class in my cluster, and I have a PVC that uses that Storage Class. If I use that PVC in my Deployment, and that Deployment has 5 replicas, will the Storage Class create 5 PVs, one per Pod? Or only 1 PV shared by all Pods under that Deployment?

Edit: I also have 3 Nodes in this cluster.

Thanks in advance.

The Persistent Volume Claim resource is specified separately from a Deployment. It doesn't matter how many replicas the Deployment has; Kubernetes will only create the number of PVC resources that you define.

If you are looking for multiple stateful containers that each create their own PVC, use a StatefulSet instead. This includes a volumeClaimTemplates definition.

If you want all Deployment replicas to share a PVC, the storage class provisioner plugin will need to support either ReadOnlyMany or ReadWriteMany access modes.
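The shared-PVC case described above can be sketched as a single PVC mounted by every replica of a Deployment. This is a minimal illustration, not a production manifest: the names (`shared-data`, `web`) and the `storageClassName` are placeholders, and the chosen class must map to a provisioner that actually supports ReadWriteMany (e.g. an NFS- or CephFS-backed one).

```yaml
# One PVC -> one PV, mounted by all 5 replicas.
# "shared-data" and "nfs-client" are illustrative placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany          # required so multiple pods can mount it read-write
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 5
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: shared-data   # all 5 pods reference this same PVC
```

Because the Deployment references one named claim, the Storage Class provisions exactly one PV regardless of the replica count.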

To answer my question directly:

The Storage Class in this case will provision only one PV, which is shared across all Pods under the Deployment that uses that PVC.

The accessModes of the PVC does not dictate whether one PV is created per pod. You can set accessModes to ReadWriteOnce, ReadOnlyMany, or ReadWriteMany, and it will always create 1 PV.

If you want each Pod to have its own PV, you cannot do that under a Deployment.

You will need to use a StatefulSet with volumeClaimTemplates.

It is important that the StatefulSet uses volumeClaimTemplates; otherwise it will still behave the same as a Deployment, i.e. the Storage Class will provision just one PV shared across all Pods under that StatefulSet.
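The per-pod-PV case can be sketched as a StatefulSet with a volumeClaimTemplates section. Again this is a minimal sketch with placeholder names (`db`, `data`, the `standard` storage class): each replica gets its own claim (`data-db-0`, `data-db-1`, `data-db-2` by the StatefulSet naming convention), so the Storage Class provisions one PV per Pod.

```yaml
# Each replica gets its own PVC from the template below,
# so the storage class provisions one PV per pod.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres
          volumeMounts:
            - name: data                          # matches the template name
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce    # fine here: each PV is mounted by exactly one pod
        storageClassName: standard
        resources:
          requests:
            storage: 1Gi
```

Note that the claims created from the template are not deleted when the StatefulSet scales down or is removed; each pod reattaches to its own claim by name when it is rescheduled.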

References:

Kubernetes Deployments vs StatefulSets

Is there a way to create a persistent volume per pod in a kubernetes deployment (or statefulset)?

