
RabbitMQ Install - pod has unbound immediate PersistentVolumeClaims

I am trying to install RabbitMQ in Kubernetes, following this entry on the RabbitMQ site: https://www.rabbitmq.com/blog/2020/08/10/deploying-rabbitmq-to-kubernetes-whats-involved/ .

Please note I am on CentOS 7 and Kubernetes 1.18. I am not even sure this is the best way to deploy RabbitMQ; it is just the best documentation I could find. I did find something that said volumeClaimTemplates does not support NFS, so I am wondering if that is the issue.

I have added my PersistentVolume using NFS:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: rabbitmq-nfs-pv
  namespace: ninegold-rabbitmq
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    path: /var/nfsshare
    server: 192.168.1.241
  persistentVolumeReclaimPolicy: Retain

It created the PV correctly.

[admin@centos-controller ~]$ kubectl get pv -n ninegold-rabbitmq
NAME                                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                     STORAGECLASS   REASON   AGE
ninegold-platform-custom-config-br        1Gi        RWX            Retain           Bound       ninegold-platform/ninegold-db-pgbr-repo                           22d
ninegold-platform-custom-config-pgadmin   1Gi        RWX            Retain           Bound       ninegold-platform/ninegold-db-pgadmin                             21d
ninegold-platform-custom-config-pgdata    1Gi        RWX            Retain           Bound       ninegold-platform/ninegold-db                                     22d
rabbitmq-nfs-pv                           5Gi        RWO            Retain           Available                                                                     14h

I then added my StatefulSet:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
  namespace: ninegold-rabbitmq
spec:
  selector:
    matchLabels:
      app: "rabbitmq"
  # headless service that gives network identity to the RMQ nodes, and enables them to cluster
  serviceName: rabbitmq-headless # serviceName is the name of the service that governs this StatefulSet. This service must exist before the StatefulSet, and is responsible for the network identity of the set. Pods get DNS/hostnames that follow the pattern: pod-specific-string.serviceName.default.svc.cluster.local where "pod-specific-string" is managed by the StatefulSet controller.
  volumeClaimTemplates:
  - metadata:
      name: rabbitmq-data
      namespace: ninegold-rabbitmq
    spec:
      storageClassName: local-storage
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: "5Gi"
  template:
    metadata:
      name: rabbitmq
      namespace: ninegold-rabbitmq
      labels:
        app: rabbitmq
    spec:
      initContainers:
      # Since k8s 1.9.4, config maps mount read-only volumes. Since the Docker image also writes to the config file,
      # the file must be mounted as read-write. We use init containers to copy from the config map read-only
      # path, to a read-write path
      - name: "rabbitmq-config"
        image: busybox:1.32.0
        volumeMounts:
        - name: rabbitmq-config
          mountPath: /tmp/rabbitmq
        - name: rabbitmq-config-rw
          mountPath: /etc/rabbitmq
        command:
        - sh
        - -c
        # the newline is needed since the Docker image entrypoint scripts appends to the config file
        - cp /tmp/rabbitmq/rabbitmq.conf /etc/rabbitmq/rabbitmq.conf && echo '' >> /etc/rabbitmq/rabbitmq.conf;
          cp /tmp/rabbitmq/enabled_plugins /etc/rabbitmq/enabled_plugins
      volumes:
      - name: rabbitmq-config
        configMap:
          name: rabbitmq-config
          optional: false
          items:
          - key: enabled_plugins
            path: "enabled_plugins"
          - key: rabbitmq.conf
            path: "rabbitmq.conf"
      # read-write volume into which to copy the rabbitmq.conf and enabled_plugins files
      # this is needed since the docker image writes to the rabbitmq.conf file
      # and Kubernetes Config Maps are mounted as read-only since Kubernetes 1.9.4
      - name: rabbitmq-config-rw
        emptyDir: {}
      - name: rabbitmq-data
        persistentVolumeClaim:
          claimName: rabbitmq-data
      serviceAccountName: rabbitmq
      # The Docker image runs as the `rabbitmq` user with uid 999 
      # and writes to the `rabbitmq.conf` file
      # The security context is needed since the image needs
      # permission to write to this file. Without the security 
      # context, `rabbitmq.conf` is owned by root and inaccessible
      # by the `rabbitmq` user
      securityContext:
        fsGroup: 999
        runAsUser: 999
        runAsGroup: 999
      containers:
      - name: rabbitmq
        # Community Docker Image
        image: rabbitmq:latest
        volumeMounts:
        # mounting rabbitmq.conf and enabled_plugins
        # this should have writeable access, this might be a problem
        - name: rabbitmq-config-rw
          mountPath: "/etc/rabbitmq"
          # other mount paths that were tried:
          # mountPath: "/etc/rabbitmq/conf.d/"
          # mountPath: "/var/lib/rabbitmq"
        # rabbitmq data directory
        - name: rabbitmq-data
          mountPath: "/var/lib/rabbitmq/mnesia"
        env:
        - name: RABBITMQ_DEFAULT_PASS
          valueFrom:
            secretKeyRef:
              name: rabbitmq-admin
              key: pass
        - name: RABBITMQ_DEFAULT_USER
          valueFrom:
            secretKeyRef:
              name: rabbitmq-admin
              key: user
        - name: RABBITMQ_ERLANG_COOKIE
          valueFrom:
            secretKeyRef:
              name: erlang-cookie
              key: cookie
        ports:
        - name: amqp
          containerPort: 5672
          protocol: TCP
        - name: management
          containerPort: 15672
          protocol: TCP
        - name: prometheus
          containerPort: 15692
          protocol: TCP
        - name: epmd
          containerPort: 4369
          protocol: TCP
        livenessProbe:
          exec:
            # This is just an example. There is no "one true health check" but rather
            # several rabbitmq-diagnostics commands that can be combined to form increasingly comprehensive
            # and intrusive health checks.
            # Learn more at https://www.rabbitmq.com/monitoring.html#health-checks.
            #
            # Stage 2 check:
            command: ["rabbitmq-diagnostics", "status"]
          initialDelaySeconds: 60
          # See https://www.rabbitmq.com/monitoring.html for monitoring frequency recommendations.
          periodSeconds: 60
          timeoutSeconds: 15
        readinessProbe: # probe to know when RMQ is ready to accept traffic
          exec:
            # This is just an example. There is no "one true health check" but rather
            # several rabbitmq-diagnostics commands that can be combined to form increasingly comprehensive
            # and intrusive health checks.
            # Learn more at https://www.rabbitmq.com/monitoring.html#health-checks.
            #
            # Stage 1 check:
            command: ["rabbitmq-diagnostics", "ping"]
          initialDelaySeconds: 20
          periodSeconds: 60
          timeoutSeconds: 10

However, my StatefulSet is not binding; I am getting the following error:

running "VolumeBinding" filter plugin for pod "rabbitmq-0": pod has unbound immediate PersistentVolumeClaims

The PVC does not bind to the PV but stays in the Pending state.

[admin@centos-controller ~]$ kubectl get pvc -n ninegold-rabbitmq
NAME                       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
rabbitmq-data-rabbitmq-0   Pending                                      local-storage   14h

I have double-checked the capacity and accessModes, and I am not sure why this is not binding. My example came from https://github.com/rabbitmq/diy-kubernetes-examples/tree/master/gke ; the only change I have made is to bind my NFS volume.

Any help would be appreciated.

In your YAMLs I found some misconfigurations.

  1. local-storage class.

I assume you used the Kubernetes documentation example to create the local-storage class. That documentation notes:

Local volumes do not currently support dynamic provisioning, however a StorageClass should still be created to delay volume binding until Pod scheduling.
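
For reference, the StorageClass from that documentation example looks roughly like this (a sketch; the name local-storage is assumed to be what you created):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # no dynamic provisioning for local volumes
volumeBindingMode: WaitForFirstConsumer     # delay binding until a Pod using the claim is scheduled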

When you want to use volumeClaimTemplates, you will be using Dynamic Provisioning. It is well explained in this Medium article:

PV in StatefulSet

Specifically to the volume part, StatefulSet provides a key named volumeClaimTemplates. With that, you can request the PVC from the storage class dynamically. As part of your new statefulset app definition, replace the volumes ... The PVC is named as volumeClaimTemplate name + pod-name + ordinal number.

As local-storage does not support dynamic provisioning, it will not work. You would need to use an NFS StorageClass with a proper provisioner, or create the PV manually.
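
For example, if you keep local-storage and create the PV manually, the PV must carry the same storageClassName as the claims produced by volumeClaimTemplates, otherwise the two never match. A minimal sketch based on your existing NFS PV (the only change is the added storageClassName; the namespace field is dropped because PersistentVolumes are cluster-scoped):

kind: PersistentVolume
apiVersion: v1
metadata:
  name: rabbitmq-nfs-pv
spec:
  storageClassName: local-storage   # must match spec.storageClassName in volumeClaimTemplates
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    path: /var/nfsshare
    server: 192.168.1.241
  persistentVolumeReclaimPolicy: Retain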

Also, when you are using volumeClaimTemplates, it will create a PV and a PVC for each pod. In addition, PVC and PV bind in a 1:1 relationship. For more details you can check this SO thread.

  2. Error unbound immediate PersistentVolumeClaims

It means that dynamic provisioning didn't work as expected. If you check kubectl get pv,pvc, you will not see any new PV or PVC with the name volumeClaimTemplate name + pod-name + ordinal number.

  3. claimName: rabbitmq-data

I assume that with this claim you wanted to mount the volume created by volumeClaimTemplates, but it was never created. Also, the PVC would be named rabbitmq-data-rabbitmq-0 for the first pod and rabbitmq-data-rabbitmq-1 for the second one.
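
In other words, with volumeClaimTemplates in place you would drop the manually defined rabbitmq-data entry from volumes: and keep only the mount; the StatefulSet controller creates and attaches the claim for each pod. A rough sketch of the relevant part of your spec (everything else stays the same):

      volumes:
      - name: rabbitmq-config
        configMap:
          name: rabbitmq-config
      - name: rabbitmq-config-rw
        emptyDir: {}
      # no rabbitmq-data volume here; it is supplied by volumeClaimTemplates
      containers:
      - name: rabbitmq
        volumeMounts:
        - name: rabbitmq-data               # matches the volumeClaimTemplates name
          mountPath: "/var/lib/rabbitmq/mnesia"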

As a last note, this article, Kubernetes: NFS and Dynamic NFS provisioning, might be helpful.
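
If you prefer dynamic provisioning against your NFS server instead, one common approach is to deploy an external NFS provisioner and point a StorageClass at it. A sketch, assuming the nfs-subdir-external-provisioner is deployed with its default provisioner name:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner   # must match the name the provisioner was deployed with
parameters:
  archiveOnDelete: "false"   # whether to archive the directory when the PVC is deleted

Your volumeClaimTemplates would then reference storageClassName: nfs-client instead of local-storage.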
