
pod has unbound immediate PersistentVolumeClaims

I deployed the Milvus server using the following etcd configuration in values.yaml:

etcd:
  enabled: true
  name: etcd
  replicaCount: 3
  pdb:
    create: false
  image:
    repository: "milvusdb/etcd"
    tag: "3.5.0-r7"
    pullPolicy: IfNotPresent

  service:
    type: ClusterIP
    port: 2379
    peerPort: 2380

  auth:
    rbac:
      enabled: false

  persistence:
    enabled: true
    storageClass:
    accessMode: ReadWriteOnce
    size: 10Gi

  ## Enable auto compaction
  ## compaction by every 1000 revision
  autoCompactionMode: revision
  autoCompactionRetention: "1000"

  ## Increase default quota to 4G
  extraEnvVars:
    - name: ETCD_QUOTA_BACKEND_BYTES
      value: "4294967296"
    - name: ETCD_HEARTBEAT_INTERVAL
      value: "500"
    - name: ETCD_ELECTION_TIMEOUT
      value: "2500"

## Configuration values for the pulsar dependency
## ref: https://github.com/apache/pulsar-helm-chart


I am trying to run a Milvus cluster on Kubernetes on an Ubuntu server. I used the Helm chart from https://milvus-io.github.io/milvus-helm/

values.yaml: https://raw.githubusercontent.com/milvus-io/milvus-helm/master/charts/milvus/values.yaml

I checked the PersistentVolumeClaim; it showed the error: no persistent volumes available for this claim and no storage class is set.

This error occurs because you don't have a PersistentVolume. A PVC needs a PV with at least the same capacity as the PVC.

The PV can be created manually or by a volume provisioner.
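For illustration, a minimal manually created PV that would satisfy one 10Gi claim could look like the sketch below (the file and volume names are made up, and hostPath is used only as a stand-in backend for single-node testing):

# manual-pv-example.yaml (hypothetical)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: etcd-data-pv-0
spec:
  capacity:
    storage: 10Gi                     # must be at least the size requested by the PVC
  accessModes:
    - ReadWriteOnce                   # must include the access mode the PVC asks for
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data/etcd-0            # local directory, only suitable for testing

With 3 etcd replicas the StatefulSet creates one PVC per pod, so you would need one such PV per replica.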

The easiest way, some would say, is to use a local storageClass, which uses disk space on the node where the pod is scheduled and adds a node affinity so that the pod always starts on the same node and can use the volume on that disk (see the sketch below). In your case you are using 3 replicas. Although it is possible to start all 3 instances on the same node, that is most likely not what you want to achieve with Kubernetes: if that node fails, you won't have any instance running on another node.
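A sketch of that local approach, assuming a StorageClass named local-storage and a node named node-1 (both names are placeholders):

# local-pv-example.yaml (hypothetical)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # local volumes are not provisioned dynamically
volumeBindingMode: WaitForFirstConsumer     # delay binding until the pod is scheduled
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: etcd-local-pv-0
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /mnt/disks/etcd-0                 # disk or directory on node-1
  nodeAffinity:                             # pins the volume, and therefore the pod, to node-1
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1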

You first need to think about the infrastructure of your cluster: where should the data of the volumes be stored?

A Network File System (NFS) might be a good solution. In this case you have an NFS server somewhere in your infrastructure and all the nodes can reach it.

That way you can create a PV which is accessible from all your nodes.
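A minimal sketch of such an NFS-backed PV, assuming you already have an export reachable from every node (server IP and share path are placeholders):

# nfs-pv-example.yaml (hypothetical)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: etcd-nfs-pv-0
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: <YOUR_NFS_SERVER_IP>            # placeholder
    path: <YOUR_NFS_SERVER_SHARE>/etcd-0    # placeholder export path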

To avoid allocating a PV manually every time, you can install a volume provisioner inside your cluster.

In some clusters I use this one: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner

As I said, you must already have an NFS server, and you configure the provisioner manifest with its address and path.

It looks like this:

# patch_nfs_details.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nfs-client-provisioner
  name: nfs-client-provisioner
spec:
  template:
    spec:
      containers:
        - name: nfs-client-provisioner
          env:
            - name: NFS_SERVER
              value: <YOUR_NFS_SERVER_IP>
            - name: NFS_PATH
              value: <YOUR_NFS_SERVER_SHARE>
      volumes:
        - name: nfs-client-root
          nfs:
            server: <YOUR_NFS_SERVER_IP>
            path: <YOUR_NFS_SERVER_SHARE>
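Once the provisioner is running it registers a StorageClass (named nfs-client by default in that project; check with kubectl get storageclass) and creates a PV automatically for every PVC that references it. You could then point the etcd persistence of the Milvus chart at that class, roughly like this:

# excerpt of the Milvus values.yaml, assuming the StorageClass is named nfs-client
etcd:
  persistence:
    enabled: true
    storageClass: nfs-client   # PVCs from the chart now get dynamically provisioned NFS volumes
    accessMode: ReadWriteOnce
    size: 10Gi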

If you use NFS without a provisioner, you need to define a storageClass that is linked to your NFS volumes.

There are a lot of solutions for providing persistent volumes.

Here you can find a list of StorageClasses:

https://kubernetes.io/docs/concepts/storage/storage-classes/

In the end it also depends on where your cluster is provisioned, if you are not managing it yourself.
