
Kubernetes statefulset with NFS persistent volume

I have a Kubernetes cluster with a simple Deployment for mongodb backed by an NFS persistent volume. It works fine, but since resources like databases are stateful, I thought of using a StatefulSet for mongodb. The problem is that, going through the documentation, a StatefulSet has volumeClaimTemplates instead of the volumes field used in Deployments.

But now the problem comes.

In a Deployment the chain is:

PersistentVolume -> PersistentVolumeClaim -> Deployment

But how can we do this in a StatefulSet? Is it like:

volumeClaimTemplates -> StatefulSet

How can I set a PersistentVolume for the volumeClaimTemplates? And if we don't use a PersistentVolume with the StatefulSet, how does it create the volumes, and where does it create them? On the host machines (i.e. the Kubernetes worker nodes)?

I have a separate NFS provisioner that I am using for the mongodb Deployment (with replicas=1); how can I use the same setup with a StatefulSet?

Here is my mongo-deployment.yaml, which I am going to transform into a StatefulSet as shown in the second code snippet (mongo-stateful.yaml):

  1. mongo-deployment.yaml
<omitted>
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    name: mynfs # name can be anything
spec:
  storageClassName: manual # same storage class as pvc
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: <nfs-server-ip>
    path: "/srv/nfs/mydata" 
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany #  must be the same as PersistentVolume
  resources:
    requests:
      storage: 1Gi          
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment    
  labels:
    name: mongodb
spec:
  selector:
    matchLabels:
      app: mongodb
  replicas: 1
  template:
    metadata:
      labels: 
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        -  containerPort: 27017
        ... # omitted some parts for easy reading
        volumeMounts:
        - name: data  
          mountPath: /data/db
      volumes: 
        - name: data
          persistentVolumeClaim: 
            claimName: task-pv-claim    
  2. mongo-stateful.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    name: mynfs # name can be anything
spec:
  storageClassName: manual # same storage class as pvc
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: <nfs-server-ip>
    path: "/srv/nfs/mydata" 
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb-statefulset
spec:
  selector:
    matchLabels:
      name: mongodb-statefulset
  serviceName: mongodb-statefulset
  replicas: 2
  template:
    metadata:
      labels:
        name: mongodb-statefulset
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongodb
        image: mongo:3.6.4
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: db-data
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: db-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "manual"
      resources:
        requests:
          storage: 2Gi

But this is not working (mongo-stateful.yaml): the pods are stuck in Pending state, and kubectl describe shows:

default-scheduler 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 pod has unbound immediate PersistentVolumeClaims

PS: The Deployment works fine without any errors; the problem is only with the StatefulSet.

Can someone please help me write a StatefulSet with volumes?

If your storage class does not support dynamic volume provisioning, you have to manually create the PVs and associated PVCs using yaml files; the volumeClaimTemplates will then link the existing PVCs to your StatefulSet's pods.
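As a sketch of what "manually create the PVs" means here: a volumeClaimTemplate generates one PVC per replica, named `<template-name>-<statefulset-name>-<ordinal>` (so for the StatefulSet above, `db-data-mongodb-statefulset-0` and `db-data-mongodb-statefulset-1`), and each claim binds to any free PV with a matching storage class, access mode, and sufficient capacity. With replicas: 2 you therefore need at least two PVs; the PV names and NFS paths below are assumptions for illustration:

```yaml
# One pre-created NFS PV per StatefulSet replica (names/paths are examples).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv-0
spec:
  storageClassName: manual   # must match the volumeClaimTemplates
  capacity:
    storage: 2Gi             # >= the template's request
  accessModes:
    - ReadWriteOnce
  nfs:
    server: <nfs-server-ip>
    path: "/srv/nfs/mydata/mongo-0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv-1
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: <nfs-server-ip>
    path: "/srv/nfs/mydata/mongo-1"
```

Note that each replica should point at its own NFS directory; if both pods wrote to the same path, the two mongod instances would corrupt each other's data files.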

Here is a working example: https://github.com/k8s-school/k8s-school/blob/master/examples/MONGODB-install.sh

You should:

  • run it locally on https://kind.sigs.k8s.io/ , which supports dynamic volume provisioning, so the PVCs and PVs will be created automatically
  • export the PV and PVC yaml files
  • use these yaml files as templates to create the PVs and PVCs for your NFS backend.
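The export step above can be sketched with kubectl (the resource names match the Kind run shown below; substitute your own):

```shell
# Dump the dynamically provisioned PVC and its PV so they can serve
# as templates for the manually created NFS-backed objects.
kubectl get pvc database-mongo-0 -o yaml > pvc-template.yaml
kubectl get pv pvc-05247511-096e-4af5-8944-17e0d8222512 -o yaml > pv-template.yaml

# Before re-applying against the NFS backend, strip the cluster-managed
# fields (status, uid, resourceVersion, annotations, volumeName/claimRef)
# and replace the hostPath source with an nfs: block.
```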

Here is what you will get on Kind:

$ ./MONGODB-install.sh               
+ kubectl apply -f 13-12-mongo-configmap.yaml
configmap/mongo-init created
+ kubectl apply -f 13-11-mongo-service.yaml
service/mongo created
+ kubectl apply -f 13-14-mongo-pvc.yaml
statefulset.apps/mongo created
$ kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
mongo-0   2/2     Running   0          8m38s
mongo-1   2/2     Running   0          5m58s
mongo-2   2/2     Running   0          5m45s
$ kubectl get pvc
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
database-mongo-0   Bound    pvc-05247511-096e-4af5-8944-17e0d8222512   1Gi        RWO            standard       8m42s
database-mongo-1   Bound    pvc-f53c35a4-6fc0-4b18-b5fc-d7646815c0dd   1Gi        RWO            standard       6m2s
database-mongo-2   Bound    pvc-2a711892-eeee-4481-94b7-6b46bf5b76a7   1Gi        RWO            standard       5m49s
$ kubectl get pv 
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS   REASON   AGE
pvc-05247511-096e-4af5-8944-17e0d8222512   1Gi        RWO            Delete           Bound    default/database-mongo-0   standard                8m40s
pvc-2a711892-eeee-4481-94b7-6b46bf5b76a7   1Gi        RWO            Delete           Bound    default/database-mongo-2   standard                5m47s
pvc-f53c35a4-6fc0-4b18-b5fc-d7646815c0dd   1Gi        RWO            Delete           Bound    default/database-mongo-1   standard                6m1s

And a dump of a PVC (generated here by the volumeClaimTemplate, because of Kind's dynamic volume provisioning):

$ kubectl get pvc database-mongo-0 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: rancher.io/local-path
    volume.kubernetes.io/selected-node: kind-worker2
  creationTimestamp: "2020-10-16T15:05:20Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: mongo
  managedFields:
    ...
  name: database-mongo-0
  namespace: default
  resourceVersion: "2259"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/database-mongo-0
  uid: 05247511-096e-4af5-8944-17e0d8222512
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard
  volumeMode: Filesystem
  volumeName: pvc-05247511-096e-4af5-8944-17e0d8222512
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  phase: Bound

And the related PV:

$ kubectl get pv pvc-05247511-096e-4af5-8944-17e0d8222512 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: rancher.io/local-path
  creationTimestamp: "2020-10-16T15:05:23Z"
  finalizers:
  - kubernetes.io/pv-protection
  managedFields:
    ...
  name: pvc-05247511-096e-4af5-8944-17e0d8222512
  resourceVersion: "2256"
  selfLink: /api/v1/persistentvolumes/pvc-05247511-096e-4af5-8944-17e0d8222512
  uid: 3d1e894e-0924-411a-8378-338e48ba4a28
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: database-mongo-0
    namespace: default
    resourceVersion: "2238"
    uid: 05247511-096e-4af5-8944-17e0d8222512
  hostPath:
    path: /var/local-path-provisioner/pvc-05247511-096e-4af5-8944-17e0d8222512_default_database-mongo-0
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kind-worker2
  persistentVolumeReclaimPolicy: Delete
  storageClassName: standard
  volumeMode: Filesystem
status:
  phase: Bound
