
Kubernetes provisioning PVC from GCE Persistent disk volume shows error

I am using a GCE cluster with 2 nodes, which I set up using kubeadm. Now I want to set up a persistent volume for postgresql to be deployed. I created a PVC and PV with a storageClass, and also created a 10G disk named postgres in the same project. I am attaching the scripts for the PVC, PV, and Deployment below. I am also using a service account that has access to the disks.
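
A disk like this can be created with gcloud, for example (a sketch, assuming the us-central1-a zone used in the manifests below):

gcloud compute disks create postgres --size=10GB --zone=us-central1-a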

1. Deployment.yml

apiVersion: apps/v1
kind: Deployment 
metadata:
  name: kyc-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - image: "postgres:9.6.2"
        name: postgres
        ports:
        - containerPort: 5432
          name: postgres
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/db-data
      volumes:
      - name: postgres-storage
        persistentVolumeClaim:
          claimName: kyc-postgres-pvc

2. PersistentVolumeClaim.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kyc-postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard

3. PersistentVolume.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: kyc-postgres-pv
  annotations:
    kubernetes.io/createdby: gce-pd-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/gce-pd
  finalizers:
  - kubernetes.io/pv-protection
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: kyc-postgres-pvc
    namespace: default
  gcePersistentDisk:
    fsType: NTFS
    pdName: postgres
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: failure-domain.beta.kubernetes.io/zone
          operator: In
          values:
          - us-central1-a
        - key: failure-domain.beta.kubernetes.io/region
          operator: In
          values:
          - us-central1-a
  persistentVolumeReclaimPolicy: Delete
  storageClassName: standard
  volumeMode: Filesystem
status:
  phase: Bound

4. StorageClass.yml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-central1-a

Now when I create these volumes and deployments, the pod does not start properly. I get the following error when I try to create the deployment:

Failed to get GCE GCECloudProvider with error <nil>

I am also attaching my output for kubectl get sc:

NAME       PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard   kubernetes.io/gce-pd   Delete          Immediate           false                  10m

Can someone help me with this? Thanks in advance for your time. If I've missed anything, or over- or under-emphasised a specific point, let me know in the comments.

Your PersistentVolumeClaim specifies storageClassName: standard, which maps to a GCE PD provisioner, so you may want to rely on dynamic provisioning. When a StorageClass with a working provisioner is used, you don't need to create a PersistentVolume resource yourself; it will be provided dynamically from the Google Cloud Platform. (Or is there any specific reason you don't want to use dynamic provisioning?)
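
If you also want PVCs that omit storageClassName to bind to this class automatically, a StorageClass can be marked as the cluster default via an annotation; a minimal sketch:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard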

Using the GCECloudProvider in Kubernetes outside of the Google Kubernetes Engine has the following prerequisites:

  1. The VM needs to be run with a service account that has the right to provision disks. Info on how to run a VM with a service account can be found here.
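
     A sketch of creating such a VM with gcloud (instance and service-account names are illustrative placeholders):

     gcloud compute instances create k8s-node-1 \
       --zone=us-central1-a \
       --service-account=<sa-name>@<project-id>.iam.gserviceaccount.com \
       --scopes=https://www.googleapis.com/auth/compute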

  2. The Kubelet needs to run with the argument --cloud-provider=gce. For this, the KUBELET_KUBECONFIG_ARGS in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf has to be edited. The Kubelet can then be restarted with sudo systemctl restart kubelet.
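
     As a sketch, the edited Environment line could look like this (the first two flags are kubeadm's defaults and may differ on your node):

     Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cloud-provider=gce"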

  3. The Kubernetes cloud-config file needs to be configured. The file can be found at /etc/kubernetes/cloud-config and the following content is enough to get the cloud provider to work:

     [Global]
     project-id = "<google-project-id>"
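
     The GCE cloud provider also understands further optional [Global] keys; a sketch with two commonly used ones (values are placeholders):

     [Global]
     project-id = "<google-project-id>"
     network-name = "<network-name>"
     node-tags = "<node-tag>"
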
  4. Kubeadm needs to have GCE configured as its cloud provider. However, the nodeName has to be changed. Edit the config file and upload it to the cluster via kubeadm config upload from-file:

    cloudProvider: gce
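
     A sketch of such a config file, assuming the older v1alpha1 kubeadm API (newer kubeadm config versions dropped the top-level cloudProvider field and pass the flag through extraArgs instead):

     apiVersion: kubeadm.k8s.io/v1alpha1
     kind: MasterConfiguration
     cloudProvider: gce
     nodeName: <gce-instance-name>

     It can then be uploaded with kubeadm config upload from-file --config=<file>.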
