
CrashLoopBackOff when increasing the replica count above 1 on an Azure AKS cluster for a MongoDB image

[Screenshot of the error]

I am deploying MongoDB to Azure AKS with an Azure file share as the volume (using a PersistentVolume and PersistentVolumeClaim). If I increase the replicas to more than one, CrashLoopBackOff occurs: only one pod is created, and the others fail.

My Dockerfile to create the MongoDB image:

FROM ubuntu:16.04

# Import the MongoDB package signing key
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927

# Add the MongoDB 3.2 repository for this Ubuntu release (xenial)
RUN echo "deb http://repo.mongodb.org/apt/ubuntu $(cat /etc/lsb-release | grep DISTRIB_CODENAME | cut -d= -f2)/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list

# Install the MongoDB server packages
RUN apt-get update && apt-get install -y mongodb-org

EXPOSE 27017

# Run mongod as the container entrypoint; it stores its data in /data/db by default
ENTRYPOINT ["/usr/bin/mongod"]

YAML file for the Deployment:

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: mongo
    spec:      
      containers:
      - name: mongo
        image: <my image of mongodb>
        ports:
        - containerPort: 27017
          protocol: TCP
          name: mongo 
        volumeMounts:
        - mountPath: /data/db
          name: az-files-mongo-storage
      volumes:
      - name: az-files-mongo-storage
        persistentVolumeClaim:
          claimName: mong-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    app: mongo
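
For reference, the PersistentVolume and PersistentVolumeClaim behind mong-pvc are not shown in the question; below is a minimal sketch of what a static Azure Files pair for it might look like. The share name, secret name, and sizes here are assumptions, not from the question.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  azureFile:
    secretName: azure-storage-secret   # assumed Secret holding the storage account name and key
    shareName: mongoshare              # assumed pre-created Azure file share
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mong-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi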

For your issue, take a look at another question with the same error. It seems a second mongod cannot initialize a volume that another instance has already claimed: each mongod takes an exclusive lock on its /data/db directory (mongod.lock), so when all replicas share one volume, the second and third pods crash. From the error, I suggest you use the volume only to store the data and do any initialization in the Dockerfile when creating the image. Or, better, create a separate volume for every pod through a StatefulSet; that is the recommended approach.

Update:

The YAML file below will work for you:

apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    app: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo 
  serviceName: mongo
  replicas: 3 
  template:
    metadata:
      labels:
        app: mongo 
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: charlesacr.azurecr.io/mongodb:v1
        ports:
        - containerPort: 27017
          name: mongo
        volumeMounts:
        - name: az-files-mongo-storage
          mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: az-files-mongo-storage
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: az-files-mongo-storage
        resources:
          requests:
            storage: 5Gi
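
With volumeClaimTemplates, the StatefulSet creates a separate PersistentVolumeClaim for each replica, named <template-name>-<pod-name> (here az-files-mongo-storage-mongo-0, az-files-mongo-storage-mongo-1, and az-files-mongo-storage-mongo-2), so the pods no longer compete for a single volume.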

You need to create the StorageClass before you create the StatefulSet. The YAML file for it is below:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: az-files-mongo-storage
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1000
  - gid=1000
parameters:
  skuName: Standard_LRS

Then the pods run correctly, as in the screenshot below:

[Screenshot of the three running mongo pods]

You can configure accessModes: ReadWriteMany, but the volume or storage type must actually support that mode; see the access-modes table in the Kubernetes documentation.

According to that table, AzureFile supports ReadWriteMany, but AzureDisk does not.
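
Applied to the StatefulSet above, that means the claim template can safely request ReadWriteMany because it is backed by the azure-file StorageClass. A sketch of just that part, with only the access mode changed:

volumeClaimTemplates:
  - metadata:
      name: az-files-mongo-storage
    spec:
      accessModes:
        - ReadWriteMany   # AzureFile supports this mode; AzureDisk does not
      storageClassName: az-files-mongo-storage
      resources:
        requests:
          storage: 5Gi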

You should be using StatefulSets for MongoDB; Deployments are for stateless services.
