
Kubernetes Persistent Volume and hostpath

I was experimenting with Kubernetes Persistent Volumes. I can't find a clear explanation in the Kubernetes documentation, and the behaviour is not what I am expecting, so I'd like to ask here.

I configured the following Persistent Volume and Persistent Volume Claim.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: store-persistent-volume
  namespace: test
spec:
  storageClassName: hostpath
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data/data"

---

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: store-persistent-volume-claim
  namespace: test
spec:
  storageClassName: hostpath
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

and the following Deployment and Service configuration.

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: store-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: store
  template:
    metadata:
      labels:
        k8s-app: store
    spec:
      volumes:
      - name: store-volume
        persistentVolumeClaim:
          claimName: store-persistent-volume-claim
      containers:
      - name: store
        image: localhost:5000/store
        ports:
        - containerPort: 8383
          protocol: TCP
        volumeMounts:
        - name: store-volume
          mountPath: /data

---
#------------ Service ----------------#

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: store
  name: store
  namespace: test
spec:
  type: LoadBalancer
  ports:
  - port: 8383
    targetPort: 8383
  selector:
    k8s-app: store

As you can see, I defined '/Volumes/Data/data' as the Persistent Volume and expect it to be mounted at '/data' in the container.

So I am assuming that whatever is in '/Volumes/Data/data' on the host should be visible in the '/data' directory in the container. Is this assumption correct? Because this is definitely not happening at the moment.

My second assumption is that whatever I save at '/data' should be visible on the host, which is also not happening.

I can see from the Kubernetes console that everything started correctly (Persistent Volume, Claim, Deployment, Pod, Service...).

Am I understanding the persistent volume concept correctly at all?

PS: I am trying this on a Mac with Docker (18.05.0-ce-mac67 (25042), edge channel); maybe it does not work on a Mac?

Thanks for any answers.

Assuming you are using a multi-node Kubernetes cluster, you should be able to see the data mounted locally at /Volumes/Data/data on the specific worker node that the pod is running on.

You can check which worker your pod is scheduled on with the command kubectl get pods -o wide -n test
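
For illustration (the pod hash, IP, and node name below are hypothetical), the node appears in the last column of the output:

NAME                               READY   STATUS    RESTARTS   AGE   IP           NODE
store-deployment-7d9f8b6c5-x2kqp   1/1     Running   0          5m    10.244.1.7   worker-1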

Please note, as per the Kubernetes docs on PersistentVolume types: HostPath (Single node testing only – local storage is not supported in any way and WILL NOT WORK in a multi-node cluster).

It does work in my case.

With the following commands you can check your PersistentVolumes and claims:

kubectl get pv

kubectl get pvc -n test

and see whether the volume you defined is bound to your claim.
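
As a sketch of what a healthy binding looks like (names are taken from the manifests above; ages and some columns vary by Kubernetes version):

kubectl get pv
NAME                      CAPACITY   ACCESS MODES   STATUS   CLAIM                                STORAGECLASS
store-persistent-volume   2Gi        RWO            Bound    test/store-persistent-volume-claim   hostpath

kubectl get pvc -n test
NAME                            STATUS   VOLUME                    CAPACITY   ACCESS MODES   STORAGECLASS
store-persistent-volume-claim   Bound    store-persistent-volume   2Gi        RWO            hostpath

If STATUS shows Pending instead of Bound, the claim did not match the volume (check storageClassName, capacity, and access modes).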

Once your pod has started, you can enter the container and see your data at /data:

kubectl exec -ti <your_pod> -n test -- bash
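
To test both of the assumptions in the question, a quick round-trip check (the file names here are just examples) would be:

# on the worker node that runs the pod
echo hello > /Volumes/Data/data/from-host.txt

# inside the container (after the kubectl exec above)
ls /data                          # from-host.txt should appear here
echo world > /data/from-container.txt

# back on the worker node
ls /Volumes/Data/data             # from-container.txt should appear here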

When you use a host path, you should check this '/data' on the worker node where the pod is running.

As the answer above said, you need to run kubectl get po -n test -o wide and you will see the node the pod is hosted on. Then, if you SSH into that worker, you can see the volume, as in the sketch below.
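
A sketch of those steps (the node name and SSH access are assumptions about your cluster):

kubectl get po -n test -o wide    # note the NODE column
ssh worker-1                      # replace worker-1 with that NODE value
ls /Volumes/Data/data             # the hostPath defined in the PV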
