
Kubernetes Minikube with local persistent storage

I am currently trying to deploy the following on Minikube. I am using the configuration files below, which use a hostPath as persistent storage on the Minikube node.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: "pv-volume"
spec:
  capacity:
    storage: "20Gi"
  accessModes:
    - "ReadWriteOnce"
  hostPath:
    path: /data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "orientdb-pv-claim"
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "20Gi"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: orientdbservice 
spec:
  #replicas: 1
  template:
    metadata:
     name: orientdbservice
     labels:
       run: orientdbservice
       test: orientdbservice
    spec:
      containers:
        - name: orientdbservice
          image: orientdb:latest
          env:
           - name: ORIENTDB_ROOT_PASSWORD
             value: "rootpwd"
          ports:
          - containerPort: 2480
            name: orientdb
          volumeMounts:
          - name: orientdb-config
            mountPath: /data/orientdb/config
          - name: orientdb-databases
            mountPath: /data/orientdb/databases 
          - name: orientdb-backup
            mountPath: /data/orientdb/backup
      volumes:
          - name: orientdb-config
            persistentVolumeClaim:
              claimName: orientdb-pv-claim
          - name: orientdb-databases
            persistentVolumeClaim:
              claimName: orientdb-pv-claim
          - name: orientdb-backup
            persistentVolumeClaim:
              claimName: orientdb-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: orientdbservice
  labels:
    run: orientdbservice
spec:
  type: NodePort
  selector:
    run: orientdbservice
  ports:
   - protocol: TCP
     port: 2480
     name: http

which results in the following:

#kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                       STORAGECLASS   REASON    AGE
pv-volume                                  20Gi       RWO           Retain          Available                                                        4h
pvc-cd14d593-78fc-11e7-a46d-1277ec3dd2b5   20Gi       RWO           Delete          Bound       default/orientdb-pv-claim   standard                 4h
#kubectl get pvc
NAME                STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
orientdb-pv-claim   Bound     pvc-cd14d593-78fc-11e7-a46d-1277ec3dd2b5   20Gi       RWO 
#kubectl get svc
NAME              CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
orientdbservice   10.0.0.16    <nodes>       2480:30552/TCP   4h
#kubectl get pods
NAME                              READY     STATUS              RESTARTS   AGE
orientdbservice-458328598-zsmw5   0/1       ContainerCreating   0          4h
#kubectl describe pod orientdbservice-458328598-zsmw5
Events:
  FirstSeen  LastSeen  Count  From               SubObjectPath  Type     Reason       Message
  ---------  --------  -----  ----               -------------  ----     ------       -------
  4h         1m        37     kubelet, minikube                 Warning  FailedMount  Unable to mount volumes for pod "orientdbservice-458328598-zsmw5_default(392b1298-78ff-11e7-a46d-1277ec3dd2b5)": timeout expired waiting for volumes to attach/mount for pod "default"/"orientdbservice-458328598-zsmw5". list of unattached/unmounted volumes=[orientdb-databases]
  4h         1m        37     kubelet, minikube                 Warning  FailedSync   Error syncing pod

I see the following error

Unable to mount volumes for pod, timeout expired waiting for volumes to attach/mount for pod

Is there something incorrect in the way I am creating the PersistentVolume and PersistentVolumeClaim on my node?

minikube version: v0.20.0

Appreciate all the help

Your configuration is fine.

Tested under minikube v0.24.0, v0.25.0, and v0.26.1 without any problem.

Keep in mind that Minikube is under active development and, especially if you're on Windows, is, as they say, experimental software.

Update to a newer version of minikube and redeploy it. This should solve the problem.

You can check for updates with the minikube update-check command, which results in something like this:

$ minikube update-check
CurrentVersion: v0.25.0
LatestVersion: v0.26.1

To upgrade Minikube, simply run minikube delete, which deletes your current Minikube installation, and then download the new release as described:

$ minikube delete
There is a newer version of minikube available (v0.26.1).  Download it here:
https://github.com/kubernetes/minikube/releases/tag/v0.26.1

To disable this notification, run the following:
minikube config set WantUpdateNotification false
Deleting local Kubernetes cluster...
Machine deleted.
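After the delete, grab the new release and start a fresh cluster. A minimal sketch for Linux (the URL follows Minikube's usual release layout; adjust the version, OS, and architecture for your machine, or download from the GitHub releases page shown above):

$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.26.1/minikube-linux-amd64
$ chmod +x minikube && sudo mv minikube /usr/local/bin/
$ minikube start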

For some reason the provisioner k8s.io/minikube-hostpath in Minikube doesn't work. You can see this in your kubectl get pv output: the default standard storage class dynamically provisioned a new volume for your claim, while your manually created pv-volume stayed Available.

So:

  • Delete the default storage class: kubectl delete storageclass standard
  • Create the following storage class:

     apiVersion: storage.k8s.io/v1
     kind: StorageClass
     metadata:
       name: standard
     provisioner: docker.io/hostpath
     reclaimPolicy: Retain

  • Also, in your volume mounts, you have one PVC bound to one PV, so instead of multiple volumes just define a single volume and mount it with different subPaths; that will create three subdirectories (backup, config, and databases) under your host's /data directory:


apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: orientdbservice
spec:
  #replicas: 1
  template:
    metadata:
     name: orientdbservice
     labels:
       run: orientdbservice
       test: orientdbservice
    spec:
      containers:
        - name: orientdbservice
          image: orientdb:latest
          env:
           - name: ORIENTDB_ROOT_PASSWORD
             value: "rootpwd"
          ports:
          - containerPort: 2480
            name: orientdb
          volumeMounts:
          - name: orientdb
            mountPath: /data/orientdb/config
            subPath: config
          - name: orientdb
            mountPath: /data/orientdb/databases
            subPath: databases
          - name: orientdb
            mountPath: /data/orientdb/backup
            subPath: backup
      volumes:
          - name: orientdb
            persistentVolumeClaim:
              claimName: orientdb-pv-claim 
  • Now deploy your YAML: kubectl create -f yourorientdb.yaml
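To confirm everything is wired up, you can check that the claim is bound and that the pod starts and mounts the volume. A quick sanity check along these lines (the pod name placeholder is yours to fill in):

$ kubectl get pvc orientdb-pv-claim
$ kubectl get pods -l run=orientdbservice
$ kubectl exec <your-pod-name> -- ls /data/orientdb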
