
pod has unbound immediate PersistentVolumeClaims ops manager

EDIT: SEE BELOW

I am new to Kubernetes and am trying to build a local cluster with 2 physical machines using kubeadm. I am following the steps at https://github.com/mongodb/mongodb-enterprise-kubernetes and everything is OK. First I install the Kubernetes operator, but when I try to install Ops Manager I get: 0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims. The YAML I used to install Ops Manager is:

    ---
    apiVersion: mongodb.com/v1
    kind: MongoDBOpsManager
    metadata:
      name: opsmanager1
    spec:
      replicas: 2
      version: 4.2.0
      adminCredentials: mongo-db-admin1 # Should match metadata.name
                                        # in the Kubernetes secret
                                        # for the admin user
      externalConnectivity:
        type: NodePort

      applicationDatabase:
        members: 3
        version: 4.4.0
        persistent: true
        podSpec:
          persistence:
            single:
              storage: 1Gi

I can't figure out what the problem is. I am in a testing phase, and my goal is to build a scalable MongoDB database. Thanks in advance.
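When a Pod reports unbound PersistentVolumeClaims, the events on the pending claim usually say why it could not bind. A quick way to inspect (the claim name below is a placeholder; take the real one from the first command):

    # List all claims and their status; Pending means nothing bound or was provisioned
    kubectl get pvc --all-namespaces

    # Show the events for one pending claim (placeholder name)
    kubectl describe pvc data-opsmanager1-0

    # List PersistentVolumes that could satisfy a claim
    kubectl get pv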

Edit: I made a few changes. I created a StorageClass like this:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: localstorage
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: Immediate
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    
    ---
    kind: PersistentVolume
    apiVersion: v1
    metadata:
      name: mongo-01
      labels:
        type: local
    spec:
      storageClassName: localstorage
      capacity:
        storage: 2Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: "/home/master/mongo01"
    
    ---
    kind: PersistentVolume
    apiVersion: v1
    metadata:
      name: mongo-02
      labels:
        type: local
    spec:
      storageClassName: localstorage
      capacity:
        storage: 2Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: "/home/master/mongo02"
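One aside, not from the original post: for statically provisioned node-local volumes, the Kubernetes storage documentation recommends `volumeBindingMode: WaitForFirstConsumer` instead of `Immediate`, so binding is delayed until a Pod is scheduled and the chosen PV ends up on the same node as the Pod. That variant of the StorageClass would look like:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: localstorage
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer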

And now my YAML for Ops Manager is:

    apiVersion: mongodb.com/v1
    kind: MongoDBOpsManager
    metadata:
      name: ops-manager-localmode
    spec:
      replicas: 2
      version: 4.2.12
      adminCredentials: mongo-db-admin1
      externalConnectivity:
        type: NodePort

      statefulSet:
        spec:
          # a Persistent Volume Claim will be created for each Ops Manager Pod
          volumeClaimTemplates:
            - metadata:
                name: mongodb-versions
              spec:
                storageClassName: localstorage
                accessModes: [ "ReadWriteOnce" ]
                resources:
                  requests:
                    storage: 2Gi
          template:
            spec:
              containers:
                - name: mongodb-ops-manager
                  volumeMounts:
                    - name: mongodb-versions
                      # this is the directory in each Pod where all MongoDB
                      # archives must be put
                      mountPath: /mongodb-ops-manager/mongodb-releases

      backup:
        enabled: false

      applicationDatabase:
        members: 3
        version: 4.4.0
        persistent: true

But I get a new error: Warning ProvisioningFailed 44s (x26 over 6m53s) persistentvolume-controller no volume plugin matched name: kubernetes.io/no-provisioner
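For context: `kubernetes.io/no-provisioner` intentionally matches no volume plugin, so this warning just means the cluster cannot create volumes on demand for that class. Claims using it bind only to pre-created PVs whose storageClassName and access modes match and whose capacity is at least the requested size. Whether that matching succeeded can be checked with:

    # STATUS should move from Available to Bound, with the CLAIM column filled in
    kubectl get pv
    kubectl get pvc --all-namespaces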

At a quick glance, it looks like you don't have any volume plugin that can satisfy a PVC on your cluster. See https://v1-15.docs.kubernetes.io/docs/concepts/storage/volumes/ — your app needs a persistent volume to be created, but your cluster doesn't know how to do that.
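One more caveat, offered as a sketch rather than a confirmed fix: on a multi-node cluster, `hostPath` volumes are not pinned to a node, so a Pod can land on a node where the backing directory is empty. The `local` volume type expresses the same intent but requires node affinity, which also lets the scheduler place the Pod next to its data (the hostname value below is a placeholder for an actual node name):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: mongo-01
    spec:
      storageClassName: localstorage
      capacity:
        storage: 2Gi
      accessModes:
        - ReadWriteOnce
      local:
        path: /home/master/mongo01
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - master  # placeholder: the node that actually has /home/master/mongo01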
