
Google Kubernetes Engine: Not seeing mounted persistent volume in the instance

I created a 200GB disk with the command gcloud compute disks create --size 200GB my-disk.
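(Note: the disk has to live in the same zone as the cluster's nodes before it can attach; the full command usually includes a zone flag. The zone below is only an assumed example.)

    gcloud compute disks create my-disk --size 200GB --zone us-central1-a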

Then I created a PersistentVolume:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: my-volume
    spec:
      capacity:
        storage: 200Gi
      accessModes:
        - ReadWriteOnce
      gcePersistentDisk:
        pdName: my-disk
        fsType: ext4

Then I created a PersistentVolumeClaim:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-claim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 200Gi

Then I created a StatefulSet and mounted the volume at /mnt/disks, which is an existing directory. statefulset.yaml:

    apiVersion: apps/v1beta2
    kind: StatefulSet
    metadata:
      name: ...
    spec:
      ...
      spec:
        containers:
        - name: ...
          ...
          volumeMounts:
          - name: my-volume
            mountPath: /mnt/disks
        volumes:
        - name: my-volume
          emptyDir: {}
      volumeClaimTemplates:
      - metadata:
          name: my-claim
        spec:
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 200Gi

I ran the command kubectl get pv and saw that the disk was successfully mounted to each instance:

    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                    STORAGECLASS   REASON    AGE
    my-volume                                  200Gi      RWO            Retain           Available                                                     19m
    pvc-17c60f45-2e4f-11e8-9b77-42010af0000e   200Gi      RWO            Delete           Bound       default/my-claim-xxx_1   standard                 13m
    pvc-5972c804-2e4e-11e8-9b77-42010af0000e   200Gi      RWO            Delete           Bound       default/my-claim         standard                 18m
    pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e   200Gi      RWO            Delete           Bound       default/my-claimxxx_0    standard                 18m
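(For reference, which PV a given claim actually bound to can be checked directly with standard kubectl commands:)

    kubectl get pvc
    kubectl describe pvc my-claim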

But when I go into an instance and run df -hT, I do not see the mounted volume. Here is the output:

    Filesystem     Type      Size  Used Avail Use% Mounted on
    /dev/root      ext2      1.2G  447M  774M  37% /
    devtmpfs       devtmpfs  1.9G     0  1.9G   0% /dev
    tmpfs          tmpfs     1.9G     0  1.9G   0% /dev/shm
    tmpfs          tmpfs     1.9G  744K  1.9G   1% /run
    tmpfs          tmpfs     1.9G     0  1.9G   0% /sys/fs/cgroup
    tmpfs          tmpfs     1.9G     0  1.9G   0% /tmp
    tmpfs          tmpfs     256K     0  256K   0% /mnt/disks
    /dev/sda8      ext4       12M   28K   12M   1% /usr/share/oem
    /dev/sda1      ext4       95G  3.5G   91G   4% /mnt/stateful_partition
    tmpfs          tmpfs     1.0M  128K  896K  13% /var/lib/cloud
    overlayfs      overlay   1.0M  148K  876K  15% /etc

Does anyone have any ideas?

It's also worth mentioning that I'm trying to mount the disk into a Docker image running in Kubernetes Engine. The pod was created with the following commands:

    docker build -t gcr.io/xxx .
    gcloud docker -- push gcr.io/xxx
    kubectl create -f statefulset.yaml

The instance I went into is the one running the Docker image. I do not see the volume in either the instance or the Docker container.

UPDATE: I found the volume. I ran df -ahT in the instance and saw the relevant entries:

    /dev/sdb       -               -     -     -    - /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gke-xxx-cluster-c-pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
    /dev/sdb       -               -     -     -    - /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gke-xxx-cluster-c-pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
    /dev/sdb       -               -     -     -    - /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gke-xxx-cluster-c-pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
    /dev/sdb       -               -     -     -    - /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gke-xxx-cluster-c-pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
    /dev/sdb       -               -     -     -    - /var/lib/kubelet/pods/61bb679b-2e4e-11e8-9b77-42010af0000e/volumes/kubernetes.io~gce-pd/pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
    /dev/sdb       -               -     -     -    - /var/lib/kubelet/pods/61bb679b-2e4e-11e8-9b77-42010af0000e/volumes/kubernetes.io~gce-pd/pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
    /dev/sdb       -               -     -     -    - /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/pods/61bb679b-2e4e-11e8-9b77-42010af0000e/volumes/kubernetes.io~gce-pd/pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
    /dev/sdb       -               -     -     -    - /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/pods/61bb679b-2e4e-11e8-9b77-42010af0000e/volumes/kubernetes.io~gce-pd/pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e

Then I went into the Docker container and ran df -ahT, and I got:

    Filesystem     Type     Size  Used Avail Use% Mounted on
    /dev/sda1      ext4      95G  3.5G   91G   4% /mnt/disks

Why do I see a total size of 95G instead of 200G, which is the size of my volume?

More info: kubectl describe pod

    Name:           xxx-replicaset-0
    Namespace:      default
    Node:           gke-xxx-cluster-default-pool-5e49501c-nrzt/10.128.0.17
    Start Time:     Fri, 23 Mar 2018 11:40:57 -0400
    Labels:         app=xxx-replicaset
                    controller-revision-hash=xxx-replicaset-755c4f7cff
    Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"StatefulSet","namespace":"default","name":"xxx-replicaset","uid":"d6c3511f-2eaf-11e8-b14e-42010af0000...
                    kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container xxx-deployment
    Status:         Running
    IP:             10.52.4.5
    Created By:     StatefulSet/xxx-replicaset
    Controlled By:  StatefulSet/xxx-replicaset
    Containers:
      xxx-deployment:
        Container ID:   docker://137b3966a14538233ed394a3d0d1501027966b972d8ad821951f53d9eb908615
        Image:          gcr.io/sampeproject/xxxstaging:v1
        Image ID:       docker-pullable://gcr.io/sampeproject/xxxstaging@sha256:a96835c2597cfae3670a609a69196c6cd3d9cc9f2f0edf5b67d0a4afdd772e0b
        Port:           8080/TCP
        State:          Running
          Started:      Fri, 23 Mar 2018 11:42:17 -0400
        Ready:          True
        Restart Count:  0
        Requests:
          cpu:        100m
        Environment:  <none>
        Mounts:
          /mnt/disks from my-volume (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-hj65g (ro)
    Conditions:
      Type           Status
      Initialized    True
      Ready          True
      PodScheduled   True
    Volumes:
      my-claim:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  my-claim-xxx-replicaset-0
        ReadOnly:   false
      my-volume:
        Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:
      default-token-hj65g:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  default-token-hj65g
        Optional:    false
    QoS Class:       Burstable
    Node-Selectors:  <none>
    Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                     node.alpha.kubernetes.io/unreachable:NoExecute for 300s
    Events:
      Type     Reason                 Age                From                                                 Message
      ----     ------                 ----               ----                                                 -------
      Warning  FailedScheduling       10m (x4 over 10m)  default-scheduler                                    PersistentVolumeClaim is not bound: "my-claim-xxx-replicaset-0" (repeated 5 times)
      Normal   Scheduled              9m                 default-scheduler                                    Successfully assigned xxx-replicaset-0 to gke-xxx-cluster-default-pool-5e49501c-nrzt
      Normal   SuccessfulMountVolume  9m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  MountVolume.SetUp succeeded for volume "my-volume"
      Normal   SuccessfulMountVolume  9m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  MountVolume.SetUp succeeded for volume "default-token-hj65g"
      Normal   SuccessfulMountVolume  9m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  MountVolume.SetUp succeeded for volume "pvc-902c57c5-2eb0-11e8-b14e-42010af0000e"
      Normal   Pulling                9m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  pulling image "gcr.io/sampeproject/xxxstaging:v1"
      Normal   Pulled                 8m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  Successfully pulled image "gcr.io/sampeproject/xxxstaging:v1"
      Normal   Created                8m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  Created container
      Normal   Started                8m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  Started container

It seems it did not mount the correct volume. I ran lsblk in my container:

    NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sda       8:0    0  100G  0 disk
    ├─sda1    8:1    0 95.9G  0 part /mnt/disks
    ├─sda2    8:2    0   16M  0 part
    ├─sda3    8:3    0    2G  0 part
    ├─sda4    8:4    0   16M  0 part
    ├─sda5    8:5    0    2G  0 part
    ├─sda6    8:6    0  512B  0 part
    ├─sda7    8:7    0  512B  0 part
    ├─sda8    8:8    0   16M  0 part
    ├─sda9    8:9    0  512B  0 part
    ├─sda10   8:10   0  512B  0 part
    ├─sda11   8:11   0    8M  0 part
    └─sda12   8:12   0   32M  0 part
    sdb       8:16   0  200G  0 disk

Why is this happening?

When you use a PVC, Kubernetes manages the persistent disk for you.

The exact way PVs are created can be defined via the provisioner in a storage class. Since you are using GKE, the default SC uses the kubernetes.io/gce-pd provisioner ( https://kubernetes.io/docs/concepts/storage/storage-classes/#gce ).
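(For reference, a sketch of roughly what that default StorageClass looks like on GKE; the exact parameters can vary by cluster:)

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: standard
    provisioner: kubernetes.io/gce-pd
    parameters:
      type: pd-standard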

In other words, a new PV is created for each pod. That is also why the my-volume PV you created by hand stays Available in the kubectl get pv output: each claim bound to a freshly provisioned PV instead.

If you want to use an existing disk, you can use a Volume instead of a PVC ( https://kubernetes.io/docs/concepts/storage/volumes/#gcepersistentdisk ).
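(A minimal sketch of that approach, reusing the my-disk name from the question; the container name and image are placeholders:)

    apiVersion: v1
    kind: Pod
    metadata:
      name: ...
    spec:
      containers:
      - name: ...
        image: ...
        volumeMounts:
        - name: my-volume
          mountPath: /mnt/disks
      volumes:
      - name: my-volume
        gcePersistentDisk:
          pdName: my-disk   # the pre-existing disk created with gcloud
          fsType: ext4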

The PVC is not mounted into your container because you did not actually specify the PVC in the container's volumeMounts. Only the emptyDir volume was specified there, and an emptyDir lives on the node's boot disk, which is why df inside the container shows the 95G /dev/sda1 at /mnt/disks rather than the 200G /dev/sdb.
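(A sketch of the fix, keeping the names from the question: mount the claim defined in volumeClaimTemplates instead of an emptyDir:)

    apiVersion: apps/v1beta2
    kind: StatefulSet
    metadata:
      name: ...
    spec:
      ...
      spec:
        containers:
        - name: ...
          volumeMounts:
          - name: my-claim        # must match the volumeClaimTemplate name
            mountPath: /mnt/disks
        # the emptyDir volume named my-volume is no longer needed
      volumeClaimTemplates:
      - metadata:
          name: my-claim
        spec:
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 200Gi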

I actually recently fixed the GKE StatefulSet tutorial. Previously, some of the steps were incorrect and said to manually create the PD and PV objects; it has been corrected to use dynamic provisioning instead.

Please give it a try and see if the updated steps work for you.

