
Kubernetes storage provider that leverages cluster node disk(s)

I am building a platform on top of Kubernetes that, among other requirements, should:

  • Be OS agnostic: any Linux with a sane kernel and cgroup mounts.
  • Offer persistent storage by leveraging cluster node disk(s).
  • Offer ReadWriteMany volumes, or some way to implement shared storage.
  • Not bind Pods to a specific node (as local persistent volumes do).
  • Reattach volumes automatically when Pods are migrated (e.g. due to a node drain or a lost node).
  • Offer data replication at the storage level.
  • Not assume a dedicated raw block device is available on each node.

I'm addressing the first point by using static binaries for the k8s components and the container engine, coupled with minimal host tooling that also consists of static binaries.

I'm still looking for a solution for persistent storage.

What I have evaluated/used so far:

So the question is: what other options do I have for Kubernetes persistent storage while using the cluster node disks?

The following options can be considered:

  1. Kubernetes version 1.14.0 onwards supports local persistent volumes. You can make use of local PVs using node labels (see the sketch after this list). You might have to run stateful workloads in HA (master-slave) mode so that the data remains available in case of node failures.

  2. You can install an NFS server on one of the cluster nodes and use it as storage for your workloads. NFS storage supports ReadWriteMany. This might work well if you set up the cluster on bare metal.

  3. Rook is also a good option, which you have already tried, but it is not production ready yet.
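
For option 1, a minimal sketch of a statically created local PersistentVolume pinned to a labeled node might look like the following; the disk path, node name, capacity, and storage class name are placeholders, not anything your cluster already has:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv        # hypothetical name
spec:
  capacity:
    storage: 100Gi              # placeholder size
  accessModes:
    - ReadWriteOnce             # local volumes cannot offer ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1       # placeholder path to the disk on the node
  nodeAffinity:                 # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1        # placeholder node name

Such a PV is typically paired with a StorageClass that sets volumeBindingMode: WaitForFirstConsumer, so that the scheduler picks the node before the claim is bound.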

Among the three, the first option suits your requirements. I'd like to hear any other options from the community.

According to the official documentation as of now (v1.16), K8s supports ReadWriteMany on a few different types of volumes.

Namely, these are: cephfs, glusterfs and nfs.

In general, with all of these the content of a volume is preserved, and the volume is merely unmounted when a Pod is removed. This means that a volume can be pre-populated with data, and that data can be "handed off" between Pods. These filesystems can be mounted by multiple writers simultaneously.
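
As an illustration, a PersistentVolume backed by an NFS export can advertise ReadWriteMany; the server address and export path below are placeholders:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-nfs-pv          # hypothetical name
spec:
  capacity:
    storage: 50Gi               # placeholder size
  accessModes:
    - ReadWriteMany             # NFS allows multiple simultaneous writers
  nfs:
    server: 10.0.0.10           # placeholder NFS server address
    path: /exports/shared       # placeholder export path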

Among these filesystems, glusterfs can be deployed on the Kubernetes cluster nodes themselves (at least 3 are required). The data can then be accessed in different ways, one of which is NFS.
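
With the in-tree glusterfs volume plugin documented for that era of Kubernetes, the cluster is addressed through an Endpoints object; all names, IPs, and the GlusterFS volume name below are placeholders:

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster       # hypothetical name, referenced by the volume below
subsets:
  - addresses:
      - ip: 10.0.0.1            # placeholder address of a GlusterFS node
    ports:
      - port: 1                 # a port value is required but not used
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-gluster-pv      # hypothetical name
spec:
  capacity:
    storage: 50Gi               # placeholder size
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster  # must match the Endpoints object above
    path: myvol                   # placeholder GlusterFS volume name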

A persistentVolumeClaim volume is used to mount a PersistentVolume into a Pod. PersistentVolumes are a way for users to "claim" durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment. ReadWriteMany is supported by the following volume types: AzureFile, CephFS, Glusterfs, Quobyte, NFS and PortworxVolume.

but most of those aren't an option when you have no control over the underlying infrastructure.
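
For reference, a claim requesting shared access and a Pod mounting it might look like this minimal sketch (the names and the image are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-claim            # hypothetical name
spec:
  accessModes:
    - ReadWriteMany             # requires a backing volume type that supports it
  resources:
    requests:
      storage: 10Gi             # placeholder size
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod             # hypothetical name
spec:
  containers:
    - name: app
      image: nginx              # placeholder image
      volumeMounts:
        - name: shared
          mountPath: /data
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-claim # binds to the claim above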

The local volume type represents a mounted local storage device such as a disk, partition or directory. Local volumes can only be used as a statically created PersistentVolume. The drawback is that if a node becomes unhealthy, the local volume also becomes inaccessible, and a Pod using it will not be able to run.

So at the moment there is no out-of-the-box solution that satisfies all of the requirements.

You can use OpenEBS Local PV, which can consume an entire disk for one application using the default storage class openebs-device, or share a mounted disk between multiple applications using the default storage class openebs-hostpath. More information is provided in the OpenEBS documentation under the User Guide section. This does not require open-iscsi. If you are using a direct device, it will be automatically detected and consumed by the OpenEBS Node Disk Manager. To meet the RWM use case, you can use a volume provisioned with Local PV as the underlying volume for multiple applications via an NFS provisioner. The implementation is described in the OpenEBS documentation under the Stateful Application section.
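
As a sketch, a claim against the default openebs-hostpath storage class could look like this (the claim name and size are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-hostpath-pvc      # hypothetical name
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce             # Local PV volumes are single-node
  resources:
    requests:
      storage: 5Gi              # placeholder size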

Two and a half years have passed, but this may help those who wind up here through a Google search.
OpenEBS provides a solution named rawfile-localpv that leverages node disks to create PersistentVolumes. Install it in your cluster, create a StorageClass like the one below, and then provision your PersistentVolumeClaims using that StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-sc
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer  # delay binding until a Pod is scheduled
allowVolumeExpansion: true
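
A claim using that StorageClass might then look like the following sketch (the claim name and size are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rawfile-pvc             # hypothetical name
spec:
  storageClassName: my-sc       # the StorageClass defined above
  accessModes:
    - ReadWriteOnce             # node-local storage, so single-node access
  resources:
    requests:
      storage: 10Gi             # placeholder size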

Keep in mind that with this solution your Pods are still bound to a specific node (the one where the PV resides), and you have to handle any migration yourself when needed. But it provides a neat and easy way to use high-performance storage inside a Kubernetes cluster.

Link to the project on GitHub: https://github.com/openebs/rawfile-localpv
