
Kubernetes Cinder volumes do not mount with cloud-provider=openstack

I am attempting to use the Kubernetes Cinder plugin to create both statically defined PVs as well as StorageClasses, but I see no activity between my cluster and Cinder for creating/mounting the devices.

Kubernetes version:

kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.1", GitCommit:"33cf7b9acbb2cb7c9c72a10d6636321fb180b159", GitTreeState:"clean", BuildDate:"2016-10-10T18:19:49Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.1", GitCommit:"33cf7b9acbb2cb7c9c72a10d6636321fb180b159", GitTreeState:"clean", BuildDate:"2016-10-10T18:13:36Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}

The command the kubelet is started with, and its status:

systemctl status kubelet -l
● kubelet.service - Kubelet service
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2016-10-20 07:43:07 PDT; 3h 53min ago
  Process: 2406 ExecStartPre=/usr/local/bin/install-kube-binaries (code=exited, status=0/SUCCESS)
  Process: 2400 ExecStartPre=/usr/local/bin/create-certs (code=exited, status=0/SUCCESS)
 Main PID: 2408 (kubelet)
   CGroup: /system.slice/kubelet.service
           ├─2408 /usr/local/bin/kubelet --pod-manifest-path=/etc/kubernetes/manifests --api-servers=https://172.17.0.101:6443 --logtostderr=true --v=12 --allow-privileged=true --hostname-override=jk-kube2-master --pod-infra-container-image=pause-amd64:3.0 --cluster-dns=172.31.53.53 --cluster-domain=occloud --cloud-provider=openstack --cloud-config=/etc/cloud.conf

Here is my cloud.conf file:

# cat /etc/cloud.conf
[Global]
username=<user>
password=XXXXXXXX
auth-url=http://<openStack URL>:5000/v2.0
tenant-name=Shadow
region=RegionOne

It appears that k8s is able to communicate successfully with OpenStack. From /var/log/messages:

kubelet: I1020 11:43:51.770948    2408 openstack_instances.go:41] openstack.Instances() called
kubelet: I1020 11:43:51.836642    2408 openstack_instances.go:78] Found 39 compute flavors
kubelet: I1020 11:43:51.836679    2408 openstack_instances.go:79] Claiming to support Instances
kubelet: I1020 11:43:51.836688    2408 openstack_instances.go:124] NodeAddresses(jk-kube2-master) called
kubelet: I1020 11:43:52.274332    2408 openstack_instances.go:131] NodeAddresses(jk-kube2-master) => [{InternalIP 172.17.0.101} {ExternalIP 10.75.152.101}]

My PV/PVC yaml files and cinder list output:

# cat persistentVolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: jk-test
  labels:
    type: test
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  cinder:
    volumeID: 48d2d1e6-e063-437a-855f-8b62b640a950
    fsType: ext4

# cat persistentVolumeClaim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      type: "test"

# cinder list | grep jk-cinder
| 48d2d1e6-e063-437a-855f-8b62b640a950 | available |              jk-cinder              |  10  |      -      |  false   |          

As seen above, cinder reports that the device with the ID referenced in the pv.yaml file is available. When I create them, everything seems to work fine:

NAME         CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM             REASON    AGE
pv/jk-test   10Gi       RWO           Retain          Bound     default/myclaim             5h
NAME               STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
pvc/myclaim        Bound     jk-test   10Gi       RWO           5h

Then I try to create a pod using the pvc, but it fails to mount the volume:

# cat testPod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: jk-test3
  labels:
    name: jk-test
spec:
  containers:
    - name: front-end
      image: example-front-end:latest
      ports:
        - hostPort: 6000
          containerPort: 3000
  volumes:
    - name: jk-test
      persistentVolumeClaim:
        claimName: myclaim

And here is the status of the pod:

  3h            46s             109     {kubelet jk-kube2-master}                       Warning         FailedMount     Unable to mount volumes for pod "jk-test3_default(0f83368f-96d4-11e6-8243-fa163ebfcd23)": timeout expired waiting for volumes to attach/mount for pod "jk-test3"/"default". list of unattached/unmounted volumes=[jk-test]
  3h            46s             109     {kubelet jk-kube2-master}                       Warning         FailedSync      Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "jk-test3"/"default". list of unattached/unmounted volumes=[jk-test]

I have verified that my OpenStack provider is exposing the cinder v1 and v2 APIs, and the earlier openstack_instances logs show that the nova API is accessible. Despite this, I never see any attempt on the part of k8s to communicate with cinder or nova to mount the volume.
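For reference, one way such an endpoint check can be done with the installed cinder client (a sketch only; it assumes the usual OS_* environment variables are exported for the Shadow tenant):

# Assumes OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME are exported.
# With --debug, the cinder client prints the request URLs it issues, which
# shows the volume endpoint and API version actually in use:
cinder --os-volume-api-version 1 --debug list 2>&1 | grep -i curl
cinder --os-volume-api-version 2 --debug list 2>&1 | grep -i curl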

Here are what I believe to be the relevant log messages regarding the mount failure:

kubelet: I1020 06:51:11.840341   24027 desired_state_of_world_populator.go:323] Extracted volumeSpec (0x23a45e0) from bound PV (pvName "jk-test") and PVC (ClaimName "default"/"myclaim" pvcUID 51919dfb-96c9-11e6-8243-fa163ebfcd23)
kubelet: I1020 06:51:11.840424   24027 desired_state_of_world_populator.go:241] Added volume "jk-test" (volSpec="jk-test") for pod "f957f140-96cb-11e6-8243-fa163ebfcd23" to desired state.
kubelet: I1020 06:51:11.840474   24027 desired_state_of_world_populator.go:241] Added volume "default-token-js40f" (volSpec="default-token-js40f") for pod "f957f140-96cb-11e6-8243-fa163ebfcd23" to desired state.
kubelet: I1020 06:51:11.896176   24027 reconciler.go:201] Attempting to start VerifyControllerAttachedVolume for volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896330   24027 reconciler.go:225] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896361   24027 reconciler.go:201] Attempting to start VerifyControllerAttachedVolume for volume "kubernetes.io/secret/f957f140-96cb-11e6-8243-fa163ebfcd23-default-token-js40f" (spec.Name: "default-token-js40f") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896390   24027 reconciler.go:225] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/secret/f957f140-96cb-11e6-8243-fa163ebfcd23-default-token-js40f" (spec.Name: "default-token-js40f") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896420   24027 config.go:98] Looking for [api file], have seen map[file:{} api:{}]
kubelet: E1020 06:51:11.896566   24027 nestedpendingoperations.go:253] Operation for "\"kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950\"" failed. No retries permitted until 2016-10-20 06:53:11.896529189 -0700 PDT (durationBeforeRetry 2m0s). Error: Volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23") has not yet been added to the list of VolumesInUse in the node's volume status.

Is there something I'm missing? I've been following the instructions here: the k8s-mysql-cinder-pd example, but haven't gotten any communication. As another data point, I tried defining a StorageClass as provided by k8s; here are the relevant StorageClass and PVC files:

# cat cinderStorage.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: gold
provisioner: kubernetes.io/cinder
parameters:
  availability: nova

# cat dynamicPVC.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dynamicclaim
  annotations:
    volume.beta.kubernetes.io/storage-class: "gold"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi

The StorageClass reports success, but when I try to create the PVC it gets stuck in the "Pending" state and reports "no volume plugin matched":

# kubectl get storageclass
NAME      TYPE
gold      kubernetes.io/cinder
# kubectl describe pvc dynamicclaim
Name:           dynamicclaim
Namespace:      default
Status:         Pending
Volume:
Labels:         <none>
Capacity:
Access Modes:
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                            -------------   --------        ------                  -------
  1d            15s             5867    {persistentvolume-controller }                  Warning         ProvisioningFailed      no volume plugin matched

This contradicts what is in the logs for loaded plugins:

grep plugins /var/log/messages
kubelet: I1019 11:39:41.382517   22435 plugins.go:56] Registering credential provider: .dockercfg
kubelet: I1019 11:39:41.382673   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/aws-ebs"
kubelet: I1019 11:39:41.382685   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/empty-dir"
kubelet: I1019 11:39:41.382691   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/gce-pd"
kubelet: I1019 11:39:41.382698   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/git-repo"
kubelet: I1019 11:39:41.382705   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/host-path"
kubelet: I1019 11:39:41.382712   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/nfs"
kubelet: I1019 11:39:41.382718   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/secret"
kubelet: I1019 11:39:41.382725   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/iscsi"
kubelet: I1019 11:39:41.382734   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/glusterfs"
kubelet: I1019 11:39:41.382741   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/rbd"
kubelet: I1019 11:39:41.382749   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/cinder"
kubelet: I1019 11:39:41.382755   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/quobyte"
kubelet: I1019 11:39:41.382762   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/cephfs"
kubelet: I1019 11:39:41.382781   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/downward-api"
kubelet: I1019 11:39:41.382798   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/fc"
kubelet: I1019 11:39:41.382804   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/flocker"
kubelet: I1019 11:39:41.382822   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/azure-file"
kubelet: I1019 11:39:41.382839   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/configmap"
kubelet: I1019 11:39:41.382846   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/vsphere-volume"
kubelet: I1019 11:39:41.382853   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/azure-disk"

And the nova and cinder clients are installed on my machine:

# which nova
/usr/bin/nova
# which cinder
/usr/bin/cinder

Any help is appreciated; I'm sure I'm missing something simple here.

Thanks!

Cinder volumes definitely work with Kubernetes 1.5.0 and 1.5.3 (I think they also worked on 1.4.6, which was the first version I tried; I don't know about earlier versions).

Short answer

In your Pod yaml file you are missing the volumeMounts: section.
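For illustration, here is a sketch of the testPod.yaml from the question with a volumeMounts: section added (the mountPath of /data is an assumption for the example; use whatever path your front-end image expects):

kind: Pod
apiVersion: v1
metadata:
  name: jk-test3
  labels:
    name: jk-test
spec:
  containers:
    - name: front-end
      image: example-front-end:latest
      ports:
        - hostPort: 6000
          containerPort: 3000
      volumeMounts:
        # mount the claimed cinder volume into the container;
        # /data is an assumed example path
        - name: jk-test
          mountPath: /data
  volumes:
    - name: jk-test
      persistentVolumeClaim:
        claimName: myclaim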

Longer answer

First possibility: no PV or PVC

In fact, when you already have an existing cinder volume, you can just use a Pod (or a Deployment); no PV or PVC is needed. Example:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vol-test
  labels:
    fullname: vol-test
spec:
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        fullname: vol-test
    spec:
      containers:
        - name: nginx
          image: "nginx:1.11.6-alpine"
          imagePullPolicy: IfNotPresent
          args:
            - /bin/sh
            - -c
            - echo "heey-testing" > /usr/share/nginx/html/index.html && nginx "-g daemon off;"
          ports:
            - name: http
              containerPort: 80
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html/
      volumes:
        - name: data
          cinder:
            volumeID: e143368a-440a-400f-b8a4-dd2f46c51888

This will create a Deployment and a Pod. The cinder volume will be mounted into the nginx container. To verify that you are using the volume, you can edit a file inside the /usr/share/nginx/html/ directory in the nginx container and then stop the container. Kubernetes will create a new container, and the files in its /usr/share/nginx/html/ directory will be the same as they were in the stopped container.
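A quick way to exercise this (a sketch; the manifest filename and the pod name are placeholders):

# save the Deployment above as e.g. vol-test.yaml, then:
kubectl create -f vol-test.yaml
kubectl get pods -l fullname=vol-test
# the startup command wrote this file onto the cinder volume:
kubectl exec <vol-test-pod-name> -- cat /usr/share/nginx/html/index.html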

After you delete the Deployment resource, the cinder volume is not deleted, but it is detached from the VM.

Second possibility: with a PV and a PVC

Another possibility, if you already have an existing cinder volume, is to use PV and PVC resources. You said you want to use a storage class, though the Kubernetes documentation allows not using one:

A PV with no annotation or its class annotation set to "" has no class and can only be bound to PVCs that request no particular class

source

An example storage class is:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  # to be used as value for annotation:
  # volume.beta.kubernetes.io/storage-class
  name: cinder-gluster-hdd
provisioner: kubernetes.io/cinder
parameters:
  # openstack volume type
  type: gluster_hdd
  # openstack availability zone
  availability: nova

Then, use your existing cinder volume with ID 48d2d1e6-e063-437a-855f-8b62b640a950 in a PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  # name of the PV resource visible in Kubernetes, not the name of
  # the cinder volume
  name: pv0001
  labels:
    pv-first-label: "123"
    pv-second-label: abc
  annotations:
    volume.beta.kubernetes.io/storage-class: cinder-gluster-hdd
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  cinder:
    # ID of the cinder volume
    volumeID: 48d2d1e6-e063-437a-855f-8b62b640a950

Then create a PVC whose label selector matches the PV's labels:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: vol-test
  labels:
    pvc-first-label: "123"
    pvc-second-label: abc
  annotations:
    volume.beta.kubernetes.io/storage-class: "cinder-gluster-hdd"
spec:
  accessModes:
    # the volume can be mounted as read-write by a single node
    - ReadWriteOnce
  resources:
    requests:
      storage: "1Gi"
  selector:
    matchLabels:
      pv-first-label: "123"
      pv-second-label: abc

and then a Deployment:

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: vol-test
  labels:
    fullname: vol-test
    environment: testing
spec:
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        fullname: vol-test
        environment: testing
    spec:
      nodeSelector:
        "is_worker": "true"
      containers:
        - name: nginx-exist-vol
          image: "nginx:1.11.6-alpine"
          imagePullPolicy: IfNotPresent
          args:
            - /bin/sh
            - -c
            - echo "heey-testing" > /usr/share/nginx/html/index.html && nginx "-g daemon off;"
          ports:
            - name: http
              containerPort: 80
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html/
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: vol-test
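Before creating the Deployment, it is worth confirming that the claim bound to the PV (the same kind of listing as shown in the question):

kubectl get pv pv0001
kubectl get pvc vol-test
# both should report STATUS "Bound", with the PVC bound to volume pv0001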

After you delete the k8s resources, the cinder volume is not deleted, but it is detached from the VM.

Using a PV lets you set a persistentVolumeReclaimPolicy.

Third possibility: no cinder volume created

If you don't have a cinder volume created, Kubernetes can create it for you. You then have to provide a PVC resource. I won't describe this variant, since it was not asked about.

Disclaimer

I suggest that anyone interested in finding the best option should experiment and compare these methods. Also, I used label names like pv-first-label and pvc-first-label only for better understanding; you can use e.g. first-label everywhere.

Given the following statement in the docs, I suspect the dynamic StorageClass approach doesn't work because the Cinder provisioner has not been implemented (http://kubernetes.io/docs/user-guide/persistent-volumes/#provisioner):

Storage classes have a provisioner that determines what volume plugin is used for provisioning PVs. This field must be specified. During beta, the available provisioner types are kubernetes.io/aws-ebs and kubernetes.io/gce-pd

As for why the static approach using the Cinder volume ID doesn't work, I'm not sure. I'm experiencing exactly the same problem. Kubernetes 1.2 seems to work fine, while 1.3 and 1.4 do not. That appears consistent with a significant change in PersistentVolume handling in 1.3-beta2 (https://github.com/kubernetes/kubernetes/pull/26801):

Introduced a new volume manager in kubelet that synchronizes volume mount/unmount (and attach/detach, if the attach/detach controller is not enabled). (#26801, @saad-ali)

This eliminates the race conditions between the pod creation loop and the orphaned volumes loops. It also removes the unmount/detach from the syncPod() path so volume clean up never blocks the syncPod loop.
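In line with that change, one experiment worth trying (a hedged sketch, not a confirmed fix) is to let the kubelet perform attach/detach itself rather than waiting on the controller-manager, since the attach/detach controller can only attach Cinder volumes if the controller-manager is also running with the OpenStack cloud-provider flags:

# Sketch only: add this flag to the kubelet invocation shown in the question.
# --enable-controller-attach-detach defaults to true on 1.3+; setting it to
# false makes the kubelet attach/detach volumes itself.
/usr/local/bin/kubelet \
  --cloud-provider=openstack \
  --cloud-config=/etc/cloud.conf \
  --enable-controller-attach-detach=false \
  ... (remaining flags as before)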
