
Google Kubernetes Engine: Not seeing mounted persistent volume in the instance

I created a 200GB disk with the command gcloud compute disks create --size 200GB my-disk

then created a PersistentVolume:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: my-volume
    spec:
      capacity:
        storage: 200Gi
      accessModes:
        - ReadWriteOnce
      gcePersistentDisk:
        pdName: my-disk
        fsType: ext4

then created a PersistentVolumeClaim:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-claim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 200Gi

then created a StatefulSet and mounted the volume to /mnt/disks, which is an existing directory. statefulset.yaml:

    apiVersion: apps/v1beta2
    kind: StatefulSet
    metadata:
      name: ...
    spec:
        ...
        spec:
          containers:
          - name: ...
            ...
            volumeMounts:
            - name: my-volume
              mountPath: /mnt/disks
          volumes:
          - name: my-volume
            emptyDir: {}
      volumeClaimTemplates:
      - metadata:
          name: my-claim
        spec:
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 200Gi

I ran the command kubectl get pv and saw that the volumes were successfully provisioned and bound:

    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                    STORAGECLASS   REASON    AGE
    my-volume                                  200Gi      RWO            Retain           Available                                                                     19m
    pvc-17c60f45-2e4f-11e8-9b77-42010af0000e   200Gi      RWO            Delete           Bound       default/my-claim-xxx_1   standard                 13m
    pvc-5972c804-2e4e-11e8-9b77-42010af0000e   200Gi      RWO            Delete           Bound       default/my-claim                         standard                 18m
    pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e   200Gi      RWO            Delete           Bound       default/my-claimxxx_0   standard                 18m

but when I SSH into an instance and run df -hT, I do not see the mounted volume. Below is the output:

    Filesystem     Type      Size  Used Avail Use% Mounted on
    /dev/root      ext2      1.2G  447M  774M  37% /
    devtmpfs       devtmpfs  1.9G     0  1.9G   0% /dev
    tmpfs          tmpfs     1.9G     0  1.9G   0% /dev/shm
    tmpfs          tmpfs     1.9G  744K  1.9G   1% /run
    tmpfs          tmpfs     1.9G     0  1.9G   0% /sys/fs/cgroup
    tmpfs          tmpfs     1.9G     0  1.9G   0% /tmp
    tmpfs          tmpfs     256K     0  256K   0% /mnt/disks
    /dev/sda8      ext4       12M   28K   12M   1% /usr/share/oem
    /dev/sda1      ext4       95G  3.5G   91G   4% /mnt/stateful_partition
    tmpfs          tmpfs     1.0M  128K  896K  13% /var/lib/cloud
    overlayfs      overlay   1.0M  148K  876K  15% /etc

Does anyone have any idea?

It's also worth mentioning that I'm trying to mount the disk into a Docker image running in Kubernetes Engine. The pod was created with the commands below:

    docker build -t gcr.io/xxx .
    gcloud docker -- push gcr.io/xxx
    kubectl create -f statefulset.yaml

The instance I SSHed into is the one that runs the Docker image. I do not see the volume in either the instance or the Docker container.

UPDATE: I found the volume. I ran df -ahT in the instance and saw the relevant entries:

    /dev/sdb       -               -     -     -    - /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gke-xxx-cluster-c-pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
    /dev/sdb       -               -     -     -    - /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gke-xxx-cluster-c-pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
    /dev/sdb       -               -     -     -    - /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gke-xxx-cluster-c-pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
    /dev/sdb       -               -     -     -    - /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gke-xxx-cluster-c-pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
    /dev/sdb       -               -     -     -    - /var/lib/kubelet/pods/61bb679b-2e4e-11e8-9b77-42010af0000e/volumes/kubernetes.io~gce-pd/pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
    /dev/sdb       -               -     -     -    - /var/lib/kubelet/pods/61bb679b-2e4e-11e8-9b77-42010af0000e/volumes/kubernetes.io~gce-pd/pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
    /dev/sdb       -               -     -     -    - /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/pods/61bb679b-2e4e-11e8-9b77-42010af0000e/volumes/kubernetes.io~gce-pd/pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
    /dev/sdb       -               -     -     -    - /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/pods/61bb679b-2e4e-11e8-9b77-42010af0000e/volumes/kubernetes.io~gce-pd/pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e

Then I went into the Docker container and ran df -ahT, and I got:

    Filesystem     Type     Size  Used Avail Use% Mounted on
    /dev/sda1      ext4      95G  3.5G   91G   4% /mnt/disks

Why am I seeing a total size of 95G instead of 200G, the size of my volume?

More info, from kubectl describe pod:

    Name:           xxx-replicaset-0
    Namespace:      default
    Node:           gke-xxx-cluster-default-pool-5e49501c-nrzt/10.128.0.17
    Start Time:     Fri, 23 Mar 2018 11:40:57 -0400
    Labels:         app=xxx-replicaset
                    controller-revision-hash=xxx-replicaset-755c4f7cff
    Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"StatefulSet","namespace":"default","name":"xxx-replicaset","uid":"d6c3511f-2eaf-11e8-b14e-42010af0000...
                    kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container xxx-deployment
    Status:         Running
    IP:             10.52.4.5
    Created By:     StatefulSet/xxx-replicaset
    Controlled By:  StatefulSet/xxx-replicaset
    Containers:
      xxx-deployment:
        Container ID:   docker://137b3966a14538233ed394a3d0d1501027966b972d8ad821951f53d9eb908615
        Image:          gcr.io/sampeproject/xxxstaging:v1
        Image ID:       docker-pullable://gcr.io/sampeproject/xxxstaging@sha256:a96835c2597cfae3670a609a69196c6cd3d9cc9f2f0edf5b67d0a4afdd772e0b
        Port:           8080/TCP
        State:          Running
          Started:      Fri, 23 Mar 2018 11:42:17 -0400
        Ready:          True
        Restart Count:  0
        Requests:
          cpu:        100m
        Environment:  
        Mounts:
          /mnt/disks from my-volume (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-hj65g (ro)
    Conditions:
      Type           Status
      Initialized    True
      Ready          True
      PodScheduled   True
    Volumes:
      my-claim:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  my-claim-xxx-replicaset-0
        ReadOnly:   false
      my-volume:
        Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:
      default-token-hj65g:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  default-token-hj65g
        Optional:    false
    QoS Class:       Burstable
    Node-Selectors:  
    Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                     node.alpha.kubernetes.io/unreachable:NoExecute for 300s
    Events:
      Type     Reason                 Age                From                                                      Message
      ----     ------                 ----               ----                                                      -------
      Warning  FailedScheduling       10m (x4 over 10m)  default-scheduler                                         PersistentVolumeClaim is not bound: "my-claim-xxx-replicaset-0" (repeated 5 times)
      Normal   Scheduled              9m                 default-scheduler                                         Successfully assigned xxx-replicaset-0 to gke-xxx-cluster-default-pool-5e49501c-nrzt
      Normal   SuccessfulMountVolume  9m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  MountVolume.SetUp succeeded for volume "my-volume"
      Normal   SuccessfulMountVolume  9m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  MountVolume.SetUp succeeded for volume "default-token-hj65g"
      Normal   SuccessfulMountVolume  9m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  MountVolume.SetUp succeeded for volume "pvc-902c57c5-2eb0-11e8-b14e-42010af0000e"
      Normal   Pulling                9m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  pulling image "gcr.io/sampeproject/xxxstaging:v1"
      Normal   Pulled                 8m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  Successfully pulled image "gcr.io/sampeproject/xxxstaging:v1"
      Normal   Created                8m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  Created container
      Normal   Started                8m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  Started container

It seems like it did not mount the correct volume. I ran lsblk in the Docker container:

    NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sda       8:0    0  100G  0 disk 
    ├─sda1    8:1    0 95.9G  0 part /mnt/disks
    ├─sda2    8:2    0   16M  0 part 
    ├─sda3    8:3    0    2G  0 part 
    ├─sda4    8:4    0   16M  0 part 
    ├─sda5    8:5    0    2G  0 part 
    ├─sda6    8:6    0  512B  0 part 
    ├─sda7    8:7    0  512B  0 part 
    ├─sda8    8:8    0   16M  0 part 
    ├─sda9    8:9    0  512B  0 part 
    ├─sda10   8:10   0  512B  0 part 
    ├─sda11   8:11   0    8M  0 part 
    └─sda12   8:12   0   32M  0 part 
    sdb       8:16   0  200G  0 disk

Why is this happening?

When you use PVCs, Kubernetes manages persistent disks for you.

The exact way PVs are defined is determined by the provisioner configured in the storage class. Since you use GKE, your default StorageClass uses the kubernetes.io/gce-pd provisioner ( https://kubernetes.io/docs/concepts/storage/storage-classes/#gce ).
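For reference, the default StorageClass on a GKE cluster looks roughly like the following sketch (exact parameter values can vary by cluster version):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd   # dynamically creates a GCE PD per claim
parameters:
  type: pd-standard                 # standard (non-SSD) persistent disk
```

Because this class is the default, any PVC without an explicit storageClassName (like yours) is dynamically provisioned by it, which is why extra pvc-... volumes appear in kubectl get pv.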

In other words, a new PV is created for each pod.

If you would like to use an existing disk, you can use Volumes instead of PVCs ( https://kubernetes.io/docs/concepts/storage/volumes/#gcepersistentdisk ).
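A minimal sketch of that approach, assuming your pre-created disk is still named my-disk (the pod and container names here are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pd-pod          # placeholder name
spec:
  containers:
  - name: app           # placeholder name
    image: gcr.io/xxx
    volumeMounts:
    - name: my-volume
      mountPath: /mnt/disks
  volumes:
  - name: my-volume
    gcePersistentDisk:
      pdName: my-disk   # references the existing GCE disk directly
      fsType: ext4
```

Note that a GCE PD mounted this way must be in the same zone as the node, and can only be attached read-write to one node at a time.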

The PVC is not mounted into your container because you did not actually reference the PVC in your container's volumeMounts. Only the emptyDir volume was mounted there, and an emptyDir lives on the node's boot disk, which is why /mnt/disks shows a 95G filesystem instead of your 200G disk.
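A sketch of the fix, keeping the my-claim name from your volumeClaimTemplates: point the volumeMount at the claim and drop the emptyDir volume entirely.

```yaml
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: ...
spec:
  ...
  template:
    spec:
      containers:
      - name: ...
        volumeMounts:
        - name: my-claim        # must match the volumeClaimTemplate name
          mountPath: /mnt/disks
      # no "volumes:" entry needed; the claim template supplies the volume
  volumeClaimTemplates:
  - metadata:
      name: my-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 200Gi
```

With this change, each replica gets its own dynamically provisioned 200Gi PD mounted at /mnt/disks, and lsblk inside the container should show /dev/sdb mounted there.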

I actually recently modified the GKE StatefulSet tutorial. Before, some of the steps were incorrect and said to manually create the PD and PV objects. It has since been corrected to use dynamic provisioning.

Please try that out and see if the updated steps work for you.
