
How does a Kubernetes cluster administrator create VolumeSnapshotContents?

The Kubernetes Volume Snapshots concepts documentation mentions that Volume Snapshots can be pre-provisioned:

A cluster administrator creates a number of VolumeSnapshotContents. They carry the details of the real volume snapshot on the storage system which is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.

How is this done?

Some background: I'm trying to create k8s Volume Snapshots (VS) from EBS snapshots. I want to use the VS to restore a mongodb replicaset that is deployed using the Bitnami helm chart.

I've tried creating the VS without first creating a VolumeSnapshotContent, using this method (a rough sketch of the relevant manifests follows the list):

  1. Create EBS snapshot.
  2. Create EBS volume from snapshot.
  3. Create Persistent Volume (PV) from EBS volume.
  4. Create Persistent Volume Claim (PVC) to bind to PV.
  5. Bind PVC to PV by creating pod with PVC.
  6. Create VolumeSnapshot (VS) from PVC.
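For concreteness, here is a rough sketch of what steps 3, 4 and 6 look like as manifests. The object names, size, StorageClass name and snapshot class name are placeholders; the volume ID is the EBS volume created in step 2.

# Step 3: statically provisioned PV pointing at the EBS volume (in-tree source)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-with-data
spec:
  capacity:
    storage: 8Gi              # placeholder size
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2       # placeholder StorageClass name
  awsElasticBlockStore:
    volumeID: vol-049483f660a6a66cf
    fsType: ext4
---
# Step 4: PVC bound explicitly to the PV above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-with-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2       # placeholder StorageClass name
  volumeName: mongo-with-data
  resources:
    requests:
      storage: 8Gi
---
# Step 6: VolumeSnapshot taken from the PVC -- this is the step that fails,
# because the PV's source is AWSElasticBlockStore rather than CSI
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mongo-with-data-snap
spec:
  volumeSnapshotClassName: csi-aws-vsc   # placeholder snapshot class name
  source:
    persistentVolumeClaimName: mongo-with-data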

The last step fails with this error:

Events:
  Type     Reason                         Age   From                 Message
  ----     ------                         ----  ----                 -------
  Warning  SnapshotContentCreationFailed  21s   snapshot-controller  Failed to create snapshot content with error cannot find CSI PersistentVolumeSource for volume mongo-with-data

This is because the PV created in step 3 has this as its Source:

Source:
    Type:       AWSElasticBlockStore (a Persistent Disk resource in AWS)
    VolumeID:   vol-049483f660a6a66cf
    FSType:
    Partition:  0
    ReadOnly:   false

while a PV created (behind the scenes) through the creation of a pod using a PVC has this Source:

Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            ebs.csi.aws.com
    FSType:            ext4
    VolumeHandle:      vol-05b14044113937bee
    ReadOnly:          false
    VolumeAttributes:      storage.kubernetes.io/csiProvisionerIdentity=1625656807749-8081-ebs.csi.aws.com

Both PVs have the same StorageClass.

How is this done?

The underlying driver determines that. The driver can be a CSI driver or a traditional in-tree driver. Most scenarios that involve two different drivers (even if both are CSI, but different CSI drivers) are not supported, because many resources (like VolumeSnapshotContent) are opaque by nature. That is why step 6 fails.

I feel a little bit lost about the whole workflow, and I'm not sure how the cluster is set up such that both the CSI driver and the in-tree driver are trying to use the same storage class... But you can create a CSI-typed PV in step 3. Have you followed this sample?
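For reference, a statically provisioned CSI PV for step 3 could look roughly like this (name, size and StorageClass are placeholders; the volumeHandle is the EBS volume ID from step 2):

# Statically provisioned PV that exposes the EBS volume through the CSI driver
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-with-data
spec:
  capacity:
    storage: 8Gi                          # placeholder size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2                   # placeholder; use the same class as your PVC
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-049483f660a6a66cf   # the EBS volume ID
    fsType: ext4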

I also think the whole process can be made easier by working directly from the EBS snapshot instead of creating intermediate PVs. I don't have an AWS account to confirm, but it seems AWS gives you the EBS snapshot ID, and you can reference it directly in a VolumeSnapshotContent, based on this sample.
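To sketch the idea (I can't verify it against a real cluster): the administrator creates a VolumeSnapshotContent whose snapshotHandle is the EBS snapshot ID, plus a VolumeSnapshot that binds to it. The object names, namespace and snapshot ID below are placeholders.

# Pre-provisioned snapshot content pointing at an existing EBS snapshot
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: mongo-snapshot-content
spec:
  deletionPolicy: Retain
  driver: ebs.csi.aws.com
  source:
    snapshotHandle: snap-0123456789abcdef0   # the EBS snapshot ID
  volumeSnapshotRef:
    name: mongo-snapshot
    namespace: default
---
# VolumeSnapshot that binds to the pre-provisioned content and can then be
# used as a dataSource when restoring a PVC
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mongo-snapshot
  namespace: default
spec:
  source:
    volumeSnapshotContentName: mongo-snapshot-content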

It doesn't make sense that both PVs have the same StorageClass, since the StorageClass's provisioner field determines the source type.

For the following storage class config, the source type is "AWSElasticBlockStore (a Persistent Disk resource in AWS)" because of provisioner: kubernetes.io/aws-ebs:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2   # the in-tree aws-ebs provisioner supports gp2/io1/st1/sc1, not gp3
  fsType: ext4

And for the following storage class config, the source type is "CSI (a Container Storage Interface (CSI) volume source)" because of provisioner: ebs.csi.aws.com:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  fsType: ext4

If both of your PVCs were made using the same StorageClass, I think someone updated the provisioner field between creating the two PVCs, since changing the StorageClass does not retroactively change PVCs that have already been created.
