
Kubernetes storage provider that leverages cluster node disk(s)

I am building a platform on top of Kubernetes that, among other requirements, should:

  • Be OS-agnostic: any Linux with a sane kernel and cgroup mounts.
  • Offer persistent storage by leveraging the cluster nodes' disks.
  • Offer ReadWriteMany volumes, or a way to implement shared storage.
  • Not bind Pods to a specific node (as local persistent volumes do).
  • Automatically reattach volumes when Pods are migrated (e.g. due to a node drain or a node-lost condition).
  • Offer data replication at the storage level.
  • Not assume a dedicated raw block device is available on each node.

I'm addressing the first point by using static binaries for the Kubernetes components and the container engine, coupled with minimal host tooling that also consists of static binaries.

I'm still looking for a solution for persistent storage.

What I evaluated/used so far:

So the question is: what other options do I have for Kubernetes persistent storage that use the cluster node disks?

The following options can be considered:

  1. Kubernetes version 1.14.0 onwards supports local persistent volumes. You can make use of local PVs by pinning them to nodes via node labels. You might have to run stateful workloads in HA (master-slave) mode so the data stays available in case of node failures. A minimal example is sketched after this answer.

  2. You can install an NFS server on one of the cluster nodes and use it as storage for your workloads. NFS storage supports ReadWriteMany. This might work well if you set up the cluster on bare metal.

  3. Rook is also a good option, which you have already tried, but it is not production-ready yet.

Among the three, the first option best suits your requirements. I would like to hear about other options from the community.
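
For illustration of the first option, a statically created local PersistentVolume might look like the sketch below. The storage class name, PV name, node name, mount path, and size are all assumptions for the example, not values from the answer:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage            # assumed name
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-node1           # assumed name
spec:
  capacity:
    storage: 10Gi                # assumed size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1        # assumed path on the node
  nodeAffinity:                  # pins the PV to the node that owns the disk
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1         # assumed node name

Pods that claim this PV are scheduled onto node-1, which is exactly the node-binding drawback discussed elsewhere in this thread.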

According to the official documentation, as of now (v1.16) Kubernetes supports ReadWriteMany on a few different volume types.

Namely these are: cephfs, glusterfs, and nfs.

In general, with all of these, the content of a volume is preserved and the volume is merely unmounted when a Pod is removed. This means that a volume can be pre-populated with data, and that data can be “handed off” between Pods. These filesystems can be mounted by multiple writers simultaneously.

Among these, GlusterFS can be deployed on the Kubernetes cluster nodes themselves (at least 3 are required). The data can then be accessed in different ways, one of which is NFS.
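
As a sketch of the NFS case, a statically created PersistentVolume plus a matching ReadWriteMany claim could look like this; the server address, export path, names, and size are assumptions:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-shared               # assumed name
spec:
  capacity:
    storage: 5Gi                 # assumed size
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10            # assumed NFS server (or GlusterFS NFS endpoint)
    path: /exports/shared        # assumed export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-shared-claim         # assumed name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""           # bind to the statically created PV above
  resources:
    requests:
      storage: 5Gi

Multiple Pods, on different nodes, can mount nfs-shared-claim simultaneously.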

A persistentVolumeClaim volume is used to mount a PersistentVolume into a Pod. PersistentVolumes are a way for users to “claim” durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment. ReadWriteMany is supported with the following types of volumes:

  • AzureFile
  • CephFS
  • Glusterfs
  • Quobyte
  • NFS
  • PortworxVolume

However, most of those are not an option when you have no control over the underlying infrastructure.

The local volume option represents a mounted local storage device such as a disk, partition, or directory. Local volumes can only be used as statically created PersistentVolumes. The drawback is that if a node becomes unhealthy, the local volume also becomes inaccessible, and a Pod using it will not be able to run.

So at the moment there is no solution that suits all the requirements out of the box.

You can use OpenEBS Local PV, which can consume an entire disk for an application via the default storage class openebs-device, or consume a mounted disk shared among multiple applications via the default storage class openebs-hostpath. More information is provided in the OpenEBS documentation under the User Guide section. This does not require open-iscsi. If you are using a direct device, the disk will be automatically detected and consumed by the OpenEBS Node Disk Manager. To meet the RWM use case, you can expose such a provisioned Local PV volume to multiple applications through an NFS provisioner that uses it as the underlying volume. The implementation of this is described in the OpenEBS documentation under the Stateful Applications section.
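
A minimal sketch of a claim against the default openebs-hostpath class might look like this (the claim name and size are assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-hostpath-pvc       # assumed name
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce              # Local PV volumes are single-node
  resources:
    requests:
      storage: 5Gi               # assumed size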

Two and a half years have passed, but this may be of help to those who wind up here through a Google search.
There is a solution provided by OpenEBS that leverages the node disks to create PersistentVolumes, named rawfile-localpv. Install it in your cluster, create a StorageClass like the one below, and then provision your PersistentVolumeClaims using this StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-sc
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
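
A PersistentVolumeClaim against this class might then look like the following sketch (the claim name and size are assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc                   # assumed name
spec:
  storageClassName: my-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi               # assumed size

Because of WaitForFirstConsumer, the volume is only provisioned once a Pod using this claim is scheduled, on whichever node that Pod lands on.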

Keep in mind that with this solution your Pods are still bound to a specific node (the one where the PV resides), and you have to handle the whole migration process yourself when needed. But it provides a neat and easy way to use high-performance storage inside a Kubernetes cluster.

Link to project on Github: https://github.com/openebs/rawfile-localpv
