
Creating a link to an NFS share in K3s Kubernetes

I'm very new to Kubernetes and am trying to get Node-RED running on a small cluster of Raspberry Pis. I happily managed that, but noticed that once the cluster is powered down, the next time I bring it up the flows in Node-RED have vanished.

So, I've created an NFS share on a FreeNAS box on my local network and can mount it from another RPi, so I know the permissions work.

However, I cannot get the mount to work in a Kubernetes deployment.

Any pointers as to where I have gone wrong would be appreciated.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-red
  labels:
    app: node-red
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-red
  template:
    metadata:
      labels:
        app: node-red
    spec:
      containers:
      - name: node-red
        image: nodered/node-red:latest
        ports:
        - containerPort: 1880
          name: node-red-ui
        securityContext:
          privileged: true
        volumeMounts:
        - name: node-red-data
          mountPath: /data
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: TZ
          value: Europe/London
      volumes:
         - name: node-red-data
      nfs:
         server: 192.168.1.96
         path: /mnt/Pool1/ClusterStore/nodered

The error I am getting is:

error: error validating "node-red-deploy.yml": error validating data: ValidationError(Deployment.spec.template.spec): unknown field "nfs" in io.k8s.api.core.v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false

New Information

I now have the following

apiVersion: v1
kind: PersistentVolume
metadata:
  name: clusterstore-nodered
  labels:
    type: nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /mnt/Pool1/ClusterStore/nodered
    server: 192.168.1.96 
  persistentVolumeReclaimPolicy: Recycle

claim.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clusterstore-nodered-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Now when I start the deployment it waits at Pending forever, and I see the following in the events for the PVC:

Events:
  Type     Reason                Age                   From                         Message
  ----     ------                ----                  ----                         -------
  Normal   WaitForFirstConsumer  5m47s (x7 over 7m3s)  persistentvolume-controller  waiting for first consumer to be created before binding
  Normal   Provisioning          119s (x5 over 5m44s)  rancher.io/local-path_local-path-provisioner-58fb86bdfd-rtcls_506528ac-afd0-11ea-930d-52d0b85bb2c2  External provisioner is provisioning volume for claim "default/clusterstore-nodered-claim"
  Warning  ProvisioningFailed    119s (x5 over 5m44s)  rancher.io/local-path_local-path-provisioner-58fb86bdfd-rtcls_506528ac-afd0-11ea-930d-52d0b85bb2c2  failed to provision volume with StorageClass "local-path": Only support ReadWriteOnce access mode
  Normal   ExternalProvisioning  92s (x19 over 5m44s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator

I assume that this is because I don't have an NFS provisioner; in fact, if I do kubectl get storageclass I only see local-path.

New question: how do I add a StorageClass for NFS? A little googling around has left me without a clue.

The tutorial you found basically boils down to these steps:

1.

showmount -e 192.168.1.XY 

To check whether the share is reachable from outside the NAS.

2.

helm install nfs-provisioner stable/nfs-client-provisioner --set nfs.server=192.168.1.XY --set nfs.path=/samplevolume/k3s --set image.repository=quay.io/external_storage/nfs-client-provisioner-arm

Here you replace the IP with your NFS server and the NFS path with your specific path on your NAS (both should be visible in the output of the showmount -e IP command from step 1).

Update 23.02.2021: It seems that you now have to use another chart and image (a full command sequence, including the helm repo add step these one-liners assume, is sketched after this list):

helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=192.168.1.XY --set nfs.path=/samplevolume/k3s --set image.repository=gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner

3.

kubectl get storageclass

To check that the StorageClass now exists.

4.

kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' && kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

To configure the new StorageClass as "default". Replace nfs-client and local-path with whatever kubectl get storageclass reports.

5.

kubectl get storageclass

Final check that the new class is marked as "default".
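
Putting it all together, here is a minimal sketch of the whole sequence for the current chart. It assumes the chart repo URL below (the one the nfs-subdir-external-provisioner project publishes) and treats 192.168.1.XY and /samplevolume/k3s as placeholders for your own server and export:

# Step 0, which the one-liners above assume: register the chart repository
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update

# Install the provisioner, pointed at the NFS export
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=192.168.1.XY \
  --set nfs.path=/samplevolume/k3s \
  --set image.repository=gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner

# The chart creates a StorageClass (named nfs-client by default); make it the cluster default
kubectl get storageclass
kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

Once the NFS class is the default, a plain PVC like the claim.yaml above gets provisioned on the share automatically instead of going to local-path.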

This is a validation error pointing at the very last part of your Deployment yaml, which makes it an invalid object. It looks like you've made a mistake with indentation. It should look more like this:

  volumes:
  - name: node-red-data
    nfs:
      server: 192.168.1.96
      path: /mnt/Pool1/ClusterStore/nodered

Also, as you are new to Kubernetes, I strongly recommend getting familiar with the concepts of PersistentVolumes and PersistentVolumeClaims. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV.
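
As a concrete example of that separation: once the PersistentVolume and PersistentVolumeClaim from your update are bound, the Deployment only needs to reference the claim, not the NFS server or path. A sketch of the volumes section using the claim name from your manifests:

  volumes:
  - name: node-red-data
    persistentVolumeClaim:
      claimName: clusterstore-nodered-claim

The NFS details live in the PV (or come from a provisioner), so the Pod spec stays the same even if the storage backend changes.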

Please let me know if that helped.

Ok, solved the issue. Kubernetes tutorials are really esoteric and leave out a lot of assumed steps.

My problem was down to K3s on the Pi only shipping with the local-path storage provisioner.

I finally found a tutorial that installed an NFS client storage provisioner, and now my cluster works!

This was the tutorial I found the information in.
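
For anyone following along, a quick sanity check after installing the provisioner is that the claim actually binds (the claim name here is from my manifests above):

kubectl get pvc clusterstore-nodered-claim

STATUS should now read Bound rather than Pending, and the STORAGECLASS column should show the NFS class instead of local-path.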
