Kubernetes Cinder volumes do not mount with cloud-provider=openstack

I am trying to use the cinder plugin for kubernetes to create both statically defined PVs as well as StorageClasses, but I see no activity between my cluster and cinder for creating/mounting the devices.

Kubernetes Version:

kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.1", GitCommit:"33cf7b9acbb2cb7c9c72a10d6636321fb180b159", GitTreeState:"clean", BuildDate:"2016-10-10T18:19:49Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.1", GitCommit:"33cf7b9acbb2cb7c9c72a10d6636321fb180b159", GitTreeState:"clean", BuildDate:"2016-10-10T18:13:36Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}

The command kubelet was started with, and its status:

systemctl status kubelet -l
● kubelet.service - Kubelet service
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2016-10-20 07:43:07 PDT; 3h 53min ago
  Process: 2406 ExecStartPre=/usr/local/bin/install-kube-binaries (code=exited, status=0/SUCCESS)
  Process: 2400 ExecStartPre=/usr/local/bin/create-certs (code=exited, status=0/SUCCESS)
 Main PID: 2408 (kubelet)
   CGroup: /system.slice/kubelet.service
           ├─2408 /usr/local/bin/kubelet --pod-manifest-path=/etc/kubernetes/manifests --api-servers=https://172.17.0.101:6443 --logtostderr=true --v=12 --allow-privileged=true --hostname-override=jk-kube2-master --pod-infra-container-image=pause-amd64:3.0 --cluster-dns=172.31.53.53 --cluster-domain=occloud --cloud-provider=openstack --cloud-config=/etc/cloud.conf

Here is my cloud.conf file:

# cat /etc/cloud.conf
[Global]
username=<user>
password=XXXXXXXX
auth-url=http://<openStack URL>:5000/v2.0
tenant-name=Shadow
region=RegionOne

It appears that k8s is able to communicate successfully with openstack. From /var/log/messages:

kubelet: I1020 11:43:51.770948    2408 openstack_instances.go:41] openstack.Instances() called
kubelet: I1020 11:43:51.836642    2408 openstack_instances.go:78] Found 39 compute flavors
kubelet: I1020 11:43:51.836679    2408 openstack_instances.go:79] Claiming to support Instances
kubelet: I1020 11:43:51.836688    2408 openstack_instances.go:124] NodeAddresses(jk-kube2-master) called
kubelet: I1020 11:43:52.274332    2408 openstack_instances.go:131] NodeAddresses(jk-kube2-master) => [{InternalIP 172.17.0.101} {ExternalIP 10.75.152.101}]

My PV/PVC yaml files, and cinder list output:

# cat persistentVolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: jk-test
  labels:
    type: test
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  cinder:
    volumeID: 48d2d1e6-e063-437a-855f-8b62b640a950
    fsType: ext4

# cat persistentVolumeClaim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      type: "test"
# cinder list | grep jk-cinder
| 48d2d1e6-e063-437a-855f-8b62b640a950 | available |              jk-cinder              |  10  |      -      |  false   |          

As seen above, cinder reports that the volume with the ID referenced in the pv.yaml file is available. When I create the PV and PVC, things seem to work:

NAME         CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM             REASON    AGE
pv/jk-test   10Gi       RWO           Retain          Bound     default/myclaim             5h
NAME               STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
pvc/myclaim        Bound     jk-test   10Gi       RWO           5h

Then I try to create a pod using the pvc, but it fails to mount the volume:

# cat testPod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: jk-test3
  labels:
    name: jk-test
spec:
  containers:
    - name: front-end
      image: example-front-end:latest
      ports:
        - hostPort: 6000
          containerPort: 3000
  volumes:
    - name: jk-test
      persistentVolumeClaim:
        claimName: myclaim

And here is the state of the pod:

  3h            46s             109     {kubelet jk-kube2-master}                       Warning         FailedMount     Unable to mount volumes for pod "jk-test3_default(0f83368f-96d4-11e6-8243-fa163ebfcd23)": timeout expired waiting for volumes to attach/mount for pod "jk-test3"/"default". list of unattached/unmounted volumes=[jk-test]
  3h            46s             109     {kubelet jk-kube2-master}                       Warning         FailedSync      Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "jk-test3"/"default". list of unattached/unmounted volumes=[jk-test]

I've verified that my openstack provider is exposing the cinder v1 and v2 APIs, and the previous logs from openstack_instances show the nova API is accessible. Despite that, I never see any attempt on k8s's part to communicate with cinder or nova to mount the volume.

Here are what I think are the relevant log messages regarding the failure to mount:

kubelet: I1020 06:51:11.840341   24027 desired_state_of_world_populator.go:323] Extracted volumeSpec (0x23a45e0) from bound PV (pvName "jk-test") and PVC (ClaimName "default"/"myclaim" pvcUID 51919dfb-96c9-11e6-8243-fa163ebfcd23)
kubelet: I1020 06:51:11.840424   24027 desired_state_of_world_populator.go:241] Added volume "jk-test" (volSpec="jk-test") for pod "f957f140-96cb-11e6-8243-fa163ebfcd23" to desired state.
kubelet: I1020 06:51:11.840474   24027 desired_state_of_world_populator.go:241] Added volume "default-token-js40f" (volSpec="default-token-js40f") for pod "f957f140-96cb-11e6-8243-fa163ebfcd23" to desired state.
kubelet: I1020 06:51:11.896176   24027 reconciler.go:201] Attempting to start VerifyControllerAttachedVolume for volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896330   24027 reconciler.go:225] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896361   24027 reconciler.go:201] Attempting to start VerifyControllerAttachedVolume for volume "kubernetes.io/secret/f957f140-96cb-11e6-8243-fa163ebfcd23-default-token-js40f" (spec.Name: "default-token-js40f") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896390   24027 reconciler.go:225] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/secret/f957f140-96cb-11e6-8243-fa163ebfcd23-default-token-js40f" (spec.Name: "default-token-js40f") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896420   24027 config.go:98] Looking for [api file], have seen map[file:{} api:{}]
kubelet: E1020 06:51:11.896566   24027 nestedpendingoperations.go:253] Operation for "\"kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950\"" failed. No retries permitted until 2016-10-20 06:53:11.896529189 -0700 PDT (durationBeforeRetry 2m0s). Error: Volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23") has not yet been added to the list of VolumesInUse in the node's volume status.
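
The last error says the volume has not been added to the list of VolumesInUse in the node's volume status. If it helps, I believe that status can be checked directly on the node object (using the jsonpath output format), which should show whether the attach was ever recorded:

# kubectl get node jk-kube2-master -o jsonpath='{.status.volumesInUse}'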

Is there a piece I am missing? I've followed the instructions here: k8s - mysql-cinder-pd example, but haven't been able to get any communication. As another data point, I tried defining a StorageClass as provided by k8s; here are the associated StorageClass and PVC files:

# cat cinderStorage.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: gold
provisioner: kubernetes.io/cinder
parameters:
  availability: nova
# cat dynamicPVC.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dynamicclaim
  annotations:
    volume.beta.kubernetes.io/storage-class: "gold"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi

The StorageClass reports success, but when I try to create the PVC it gets stuck in the 'pending' state and reports 'no volume plugin matched':

# kubectl get storageclass
NAME      TYPE
gold      kubernetes.io/cinder
# kubectl describe pvc dynamicclaim
Name:           dynamicclaim
Namespace:      default
Status:         Pending
Volume:
Labels:         <none>
Capacity:
Access Modes:
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                            -------------   --------        ------                  -------
  1d            15s             5867    {persistentvolume-controller }                  Warning         ProvisioningFailed      no volume plugin matched

This contradicts what's in the logs for the plugins that were loaded:

grep plugins /var/log/messages
kubelet: I1019 11:39:41.382517   22435 plugins.go:56] Registering credential provider: .dockercfg
kubelet: I1019 11:39:41.382673   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/aws-ebs"
kubelet: I1019 11:39:41.382685   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/empty-dir"
kubelet: I1019 11:39:41.382691   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/gce-pd"
kubelet: I1019 11:39:41.382698   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/git-repo"
kubelet: I1019 11:39:41.382705   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/host-path"
kubelet: I1019 11:39:41.382712   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/nfs"
kubelet: I1019 11:39:41.382718   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/secret"
kubelet: I1019 11:39:41.382725   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/iscsi"
kubelet: I1019 11:39:41.382734   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/glusterfs"
kubelet: I1019 11:39:41.382741   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/rbd"
kubelet: I1019 11:39:41.382749   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/cinder"
kubelet: I1019 11:39:41.382755   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/quobyte"
kubelet: I1019 11:39:41.382762   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/cephfs"
kubelet: I1019 11:39:41.382781   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/downward-api"
kubelet: I1019 11:39:41.382798   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/fc"
kubelet: I1019 11:39:41.382804   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/flocker"
kubelet: I1019 11:39:41.382822   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/azure-file"
kubelet: I1019 11:39:41.382839   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/configmap"
kubelet: I1019 11:39:41.382846   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/vsphere-volume"
kubelet: I1019 11:39:41.382853   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/azure-disk"

And I have the nova and cinder clients installed on my machine:

# which nova
/usr/bin/nova
# which cinder
/usr/bin/cinder

Any help is appreciated; I'm sure I'm missing something simple here.

Thanks!

Cinder volumes definitely work with Kubernetes 1.5.0 and 1.5.3 (I think they also worked on 1.4.6, which I was first experimenting on; I don't know about previous versions).

Short answer

In your Pod yaml file you were missing the volumeMounts: section.
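
For reference, a minimal sketch of the Pod from the question with that section added; the mountPath below is only an illustrative guess, so use whatever path your front-end image actually expects:

kind: Pod
apiVersion: v1
metadata:
  name: jk-test3
  labels:
    name: jk-test
spec:
  containers:
    - name: front-end
      image: example-front-end:latest
      ports:
        - hostPort: 6000
          containerPort: 3000
      # the missing piece: mount the claimed volume into the container
      volumeMounts:
        - name: jk-test
          mountPath: /data    # example path, adjust for your image
  volumes:
    - name: jk-test
      persistentVolumeClaim:
        claimName: myclaim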

Longer answer

First possibility: no PV or PVC

Actually, when you already have an existing cinder volume, you can just use a Pod (or Deployment); no PV or PVC is needed. Example:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vol-test
  labels:
    fullname: vol-test
spec:
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        fullname: vol-test
    spec:
      containers:
      - name: nginx
        image: "nginx:1.11.6-alpine"
        imagePullPolicy: IfNotPresent
        args:
        - /bin/sh
        - -c
        - echo "heey-testing" > /usr/share/nginx/html/index.html && nginx "-g daemon off;"
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: data
        cinder:
          volumeID: e143368a-440a-400f-b8a4-dd2f46c51888

This will create a Deployment and a Pod. The cinder volume will be mounted into the nginx container. To verify that you are using the volume, you can edit a file inside the nginx container, under the /usr/share/nginx/html/ directory, and then stop the container. Kubernetes will create a new container, and inside it the files in /usr/share/nginx/html/ will be the same as they were in the stopped container.
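
A rough sketch of that verification with kubectl (the pod names are placeholders you would take from the kubectl get pods output):

# list the pod created by the Deployment
kubectl get pods -l fullname=vol-test
# overwrite the file that lives on the cinder volume
kubectl exec <pod-name> -- sh -c 'echo "changed" > /usr/share/nginx/html/index.html'
# delete the pod; the Deployment creates a replacement
kubectl delete pod <pod-name>
# the replacement pod should still serve the modified file
kubectl exec <new-pod-name> -- cat /usr/share/nginx/html/index.html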

After you delete the Deployment resource, the cinder volume is not deleted, but it is detached from the VM.

Second possibility: with PV and PVC

Another possibility, if you already have an existing cinder volume, is to use PV and PVC resources. You said you want to use a storage class, though the Kubernetes docs allow not using one:

A PV with no annotation or its class annotation set to "" has no class and can only be bound to PVCs that request no particular class

source

An example storage class is:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  # to be used as value for annotation:
  # volume.beta.kubernetes.io/storage-class
  name: cinder-gluster-hdd
provisioner: kubernetes.io/cinder
parameters:
  # openstack volume type
  type: gluster_hdd
  # openstack availability zone
  availability: nova

Then, you use your existing cinder volume with ID 48d2d1e6-e063-437a-855f-8b62b640a950 in a PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  # name of a pv resource visible in Kubernetes, not the name of
  # a cinder volume
  name: pv0001
  labels:
    pv-first-label: "123"
    pv-second-label: abc
  annotations:
    volume.beta.kubernetes.io/storage-class: cinder-gluster-hdd
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  cinder:
    # ID of cinder volume
    volumeID: 48d2d1e6-e063-437a-855f-8b62b640a950

Then create a PVC whose label selector matches the labels of the PV:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: vol-test
  labels:
    pvc-first-label: "123"
    pvc-second-label: abc
  annotations:
    volume.beta.kubernetes.io/storage-class: "cinder-gluster-hdd"
spec:
  accessModes:
    # the volume can be mounted as read-write by a single node
    - ReadWriteOnce
  resources:
    requests:
      storage: "1Gi"
  selector:
    matchLabels:
      pv-first-label: "123"
      pv-second-label: abc

and then a Deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vol-test
  labels:
    fullname: vol-test
    environment: testing
spec:
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        fullname: vol-test
        environment: testing
    spec:
      nodeSelector:
        "is_worker": "true"
      containers:
      - name: nginx-exist-vol
        image: "nginx:1.11.6-alpine"
        imagePullPolicy: IfNotPresent
        args:
        - /bin/sh
        - -c
        - echo "heey-testing" > /usr/share/nginx/html/index.html && nginx "-g daemon off;"
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: vol-test

After you delete the k8s resources, the cinder volume is not deleted, but it is detached from the VM.

Using a PV lets you set persistentVolumeReclaimPolicy.

Third possibility: no cinder volume created

If you don't have a cinder volume created, Kubernetes can create it for you; you then have to provide a PVC resource. I won't describe this variant, since it was not asked for.

Disclaimer

I suggest that anyone interested in finding the best option experiment and compare the methods themselves. Also, I used label names like pv-first-label and pvc-first-label only to make the examples easier to follow; you can use e.g. first-label everywhere.

I suspect the dynamic StorageClass approach is not working because the Cinder provisioner is not implemented yet, given the following statement in the docs (http://kubernetes.io/docs/user-guide/persistent-volumes/#provisioner):

Storage classes have a provisioner that determines what volume plugin is used for provisioning PVs. This field must be specified. During beta, the available provisioner types are kubernetes.io/aws-ebs and kubernetes.io/gce-pd

As for why the static method using Cinder volume IDs is not working, I'm not sure. I'm running into the exact same problem. Kubernetes 1.2 seems to work fine; 1.3 and 1.4 do not. This seems to coincide with the major change in PersistentVolume handling in 1.3-beta2 (https://github.com/kubernetes/kubernetes/pull/26801):

A new volume manager was introduced in kubelet that synchronizes volume mount/unmount (and attach/detach, if attach/detach controller is not enabled). (#26801, @saad-ali)

This eliminates the race conditions between the pod creation loop and the orphaned volumes loops. It also removes the unmount/detach from the syncPod() path so volume clean up never blocks the syncPod loop.
