Kubernetes provisioning PVC from GCE persistent disk volume shows error
I am using a GCE cluster with 2 nodes, which I set up using kubeadm. Now I want to set up a persistent volume for a postgresql deployment. I created a PVC and PV with a storageClass, and also created a 10G disk named postgres in the same project. I am attaching the manifests for the PVC, PV, and Deployment below. I am also using a service account that has access to the disks.
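For reference, creating such a disk looks roughly like this (the zone us-central1-a is an assumption taken from the PV's nodeAffinity below):

gcloud compute disks create postgres \
    --size=10GB \
    --type=pd-standard \
    --zone=us-central1-a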
1.Deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kyc-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - image: "postgres:9.6.2"
          name: postgres
          ports:
            - containerPort: 5432
              name: postgres
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/db-data
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: kyc-postgres-pvc
2.PersistentVolumeClaim.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kyc-postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard
3.PersistentVolume.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kyc-postgres-pv
  annotations:
    kubernetes.io/createdby: gce-pd-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/gce-pd
  finalizers:
    - kubernetes.io/pv-protection
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: kyc-postgres-pvc
    namespace: default
  gcePersistentDisk:
    fsType: NTFS
    pdName: postgres
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: failure-domain.beta.kubernetes.io/zone
              operator: In
              values:
                - us-central1-a
            - key: failure-domain.beta.kubernetes.io/region
              operator: In
              values:
                - us-central1-a
  persistentVolumeReclaimPolicy: Delete
  storageClassName: standard
  volumeMode: Filesystem
status:
  phase: Bound
4.StorageClass.yml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-central1-a
Now when I create these volumes and deployments, the pod does not start properly. I get the following error when I try to create the deployment:
Failed to get GCE GCECloudProvider with error <nil>
I am also attaching my output for kubectl get sc:
NAME       PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard   kubernetes.io/gce-pd   Delete          Immediate           false                  10m
Can someone help me with this? Thanks in advance for your time – if I've missed anything out, or over- or under-emphasised a specific point, let me know in the comments.
Your PersistentVolumeClaim does not specify a storageClassName, so I suppose you may want to use the default StorageClass. When using a default StorageClass, you don't need to create a PersistentVolume resource; it will be provisioned dynamically by the Google Cloud Platform. (Or is there any specific reason you don't want to use the default StorageClass?)
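For illustration, a minimal PVC relying on the default StorageClass might look like this (same name and size as in the question; the key point is that storageClassName is omitted, so the default class provisions the disk dynamically and no PersistentVolume manifest is needed):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kyc-postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi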
Using the GCECloudProvider in Kubernetes outside of the Google Kubernetes Engine has the following prerequisites:
The VM needs to be run with a service account that has the right to provision disks. Info on how to run a VM with a service account can be found here.
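As a rough sketch, attaching such a service account to an existing node VM could look like this (the instance name kube-node-1 and the service account are placeholders, not taken from the question; the instance must be stopped before its service account can be changed):

gcloud compute instances stop kube-node-1 --zone=us-central1-a
gcloud compute instances set-service-account kube-node-1 \
    --zone=us-central1-a \
    --service-account=disk-provisioner@<google-project-id>.iam.gserviceaccount.com \
    --scopes=https://www.googleapis.com/auth/compute
gcloud compute instances start kube-node-1 --zone=us-central1-a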
The Kubelet needs to run with the argument --cloud-provider=gce. For this, the KUBELET_KUBECONFIG_ARGS in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf have to be edited. The Kubelet can then be restarted with sudo systemctl restart kubelet.
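A minimal sketch of the edited drop-in, assuming the default kubeadm kubeconfig paths (only --cloud-provider=gce is added; the rest is the stock kubeadm line, which may differ on your nodes):

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cloud-provider=gce"

sudo systemctl daemon-reload
sudo systemctl restart kubelet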
The Kubernetes cloud-config file needs to be configured. The file can be found at /etc/kubernetes/cloud-config, and the following content is enough to get the cloud provider to work:
[Global]
project-id = "<google-project-id>"
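Depending on the setup, the kubelet may also need to be pointed at this file explicitly via its --cloud-config flag, appended next to --cloud-provider=gce in the same drop-in (an assumption worth verifying against your kubelet version):

--cloud-provider=gce --cloud-config=/etc/kubernetes/cloud-config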
Kubeadm needs to have GCE configured as its cloud provider. However, the nodeName has to be changed. Edit the config file and upload it to the cluster via kubeadm config upload from-file:
cloudProvider: gce
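A sketch of what such a kubeadm config file could contain (the apiVersion and kind are assumptions for an older kubeadm release; newer kubeadm config versions dropped the top-level cloudProvider field in favour of component extraArgs, and node-1 is a placeholder node name):

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
cloudProvider: gce    # assumes an older config version that still has this field
nodeName: node-1      # placeholder: set this to the GCE instance name

kubeadm config upload from-file --config=kubeadm.yaml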