Kubernetes StatefulSet is not using storage class to create persistent volume
I am new to Kubernetes. I have set up a Kubernetes cluster on two machines. When I deploy pods using a StatefulSet, Kubernetes is not creating the PVC.
I am doing a POC to install a Redis cluster on the Kubernetes cluster, so I downloaded a StatefulSet from the URL below: https://medium.com/zero-to/setup-persistence-redis-cluster-in-kubertenes-7d5b7ffdbd98
This StatefulSet was working fine with minikube, but when I deploy it on the Kubernetes cluster (which I created with 2 machines), it gives the error below:
root@xen-727:/usr/local/bin# kubectl get pods
NAME              READY   STATUS    RESTARTS   AGE
redis-cluster-0   0/1     Pending   0          13m
root@xen-727:/usr/local/bin# kubectl describe pod redis-cluster-0
Name:           redis-cluster-0
Namespace:      default
Node:           /
Labels:         app=redis-cluster
                controller-revision-hash=redis-cluster-b5b75cc79
                statefulset.kubernetes.io/pod-name=redis-cluster-0
Annotations:    <none>
Status:         Pending
IP:
Controllers:    <none>
Containers:
  redis-cluster:
    Image:        tiroshanm/kubernetes-redis-cluster:latest
    Ports:        6379/TCP, 16379/TCP
    Command:
      /usr/local/bin/redis-server
    Args:
      /redis-conf/redis.conf
    Liveness:     exec [sh -c redis-cli -h $(hostname) ping] delay=20s timeout=1s period=3s #success=1 #failure=3
    Readiness:    exec [sh -c redis-cli -h $(hostname) ping] delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-h22jv (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  data:
    Type:        PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:   data-redis-cluster-0
    ReadOnly:    false
  default-token-h22jv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-h22jv
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready=:Exists:NoExecute for 300s
                 node.kubernetes.io/unreachable=:Exists:NoExecute for 300s
Events:
  FirstSeen  LastSeen  Count  From               SubObjectPath  Type     Reason            Message
  ---------  --------  -----  ----               -------------  ----     ------            -------
  15m        14m       4      default-scheduler                 Warning  FailedScheduling  pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
root@xen-727:/usr/local/bin# kubectl get pvc
NAME                   STATUS    VOLUME    CAPACITY   ACCESSMODES   STORAGECLASS   AGE
data-redis-cluster-0   Pending                                      slow           15m
root@xen-727:/usr/local/bin# kubectl get pv
No resources found.
I created one storage class:
root@xen-727:/usr/local/bin# kubectl get sc
NAME             TYPE
slow (default)   kubernetes.io/gce-pd
But after a lot of searching, it seems that Kubernetes is not using this storage class to create a PV.
Storage class code:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
Below is my complete code:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: redis-cluster
  labels:
    app: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 6
  template:
    metadata:
      labels:
        app: redis-cluster
      annotations:
    spec:
      containers:
      - name: redis-cluster
        image: tiroshanm/kubernetes-redis-cluster:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 6379
          name: client
        - containerPort: 16379
          name: gossip
        command: ["/usr/local/bin/redis-server"]
        args: ["/redis-conf/redis.conf"]
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "redis-cli -h $(hostname) ping"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "redis-cli -h $(hostname) ping"
          initialDelaySeconds: 20
          periodSeconds: 3
        volumeMounts:
        - name: data
          mountPath: /data
          readOnly: false
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        name: redis-cluster
      annotations:
        volume.alpha.kubernetes.io/storage-class: anything
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Mi
Expected output: It should create 6 pods, with 6 PVCs and 6 PVs.
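One detail worth checking: the claim template above requests its storage class through the deprecated alpha annotation, with a class name (`anything`) that does not match the `slow` class, and on current Kubernetes versions the supported field is `spec.storageClassName`. Note also that the `kubernetes.io/gce-pd` provisioner can only create disks dynamically when the cluster runs on Google Cloud; on a self-managed two-machine cluster, PVs have to be provisioned manually. A sketch of the template rewritten to request the `slow` class explicitly:

```yaml
volumeClaimTemplates:
- metadata:
    name: data
    labels:
      name: redis-cluster
  spec:
    # Replaces the deprecated volume.alpha.kubernetes.io/storage-class
    # annotation; must match the name of an existing StorageClass.
    storageClassName: slow
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 100Mi
```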
You need to create the storage that you are requesting with the PersistentVolumeClaim.
Examples of volume types are available here.
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory). Claims can request specific sizes and access modes (e.g., they can be mounted once read/write or many times read-only).
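To illustrate the relationship, here is a minimal sketch of an administrator-provisioned PV and a claim that can bind to it (the names, path, and sizes are made up for the example; hostPath volumes are only suitable for single-node testing):

```yaml
# A statically provisioned PV backed by a directory on the node.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data
---
# A claim requesting storage; the control plane binds it to a
# matching PV (here example-pv, whose size and access mode fit).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```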
If you are on GCE, you can use gcePersistentDisk.
A gcePersistentDisk volume mounts a Google Compute Engine (GCE) Persistent Disk into your Pod. Unlike emptyDir, which is erased when a Pod is removed, the contents of a PD are preserved and the volume is merely unmounted. This means that a PD can be pre-populated with data, and that data can be "handed off" between Pods.
You need to use the gcloud command to create a disk inside GCE:
gcloud compute disks create --size=500GB --zone=us-central1-a my-data-disk
And use it inside a Pod, as in the example below:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    # This GCE PD must already exist.
    gcePersistentDisk:
      pdName: my-data-disk
      fsType: ext4
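If you would rather keep the StatefulSet's volumeClaimTemplates instead of mounting the disk directly in a Pod, the same disk can be wrapped in a statically created PV that the claim can bind to. This is only a sketch: the disk name and size are taken from the gcloud command above, and the class name assumes a claim requesting the `slow` class:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv-0
spec:
  storageClassName: slow    # must match the class requested by the claim
  capacity:
    storage: 500Gi          # size of the disk created with gcloud
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-data-disk    # the disk created with gcloud above
    fsType: ext4
```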
If you prefer, you can set up your own NFS server and use it inside Kubernetes; an example of how to set it up is available here.
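For reference, a PV backed by such an NFS server is declared the same way as the other volume types; the server address and export path below are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany         # NFS supports mounting from many nodes at once
  nfs:
    server: 10.0.0.10       # placeholder NFS server address
    path: /exports/redis    # placeholder export path
```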
You can also check the documentation on how to use volumes on AWS.
Hope this will be enough to help you.