CrashLoopBackOff while increasing replicas count more than 1 on Azure AKS cluster for MongoDB image
(Error screenshot linked in the original post.)
I am deploying MongoDB to Azure AKS with an Azure File Share as the volume (using a persistent volume and a persistent volume claim). If I increase replicas to more than one, CrashLoopBackOff occurs: only one Pod gets created, and the others fail.
My Dockerfile to create the MongoDB image:
FROM ubuntu:16.04
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
RUN echo "deb http://repo.mongodb.org/apt/ubuntu $(cat /etc/lsb-release | grep DISTRIB_CODENAME | cut -d= -f2)/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list
RUN apt-get update && apt-get install -y mongodb-org
EXPOSE 27017
ENTRYPOINT ["/usr/bin/mongod"]
YAML file for the Deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: <my image of mongodb>
          ports:
            - containerPort: 27017
              protocol: TCP
              name: mongo
          volumeMounts:
            - mountPath: /data/db
              name: az-files-mongo-storage
      volumes:
        - name: az-files-mongo-storage
          persistentVolumeClaim:
            claimName: mong-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    app: mongo
For your issue, you can take a look at another question that hits the same error. It seems you cannot initialize the same volume for mongo when another pod has already initialized it. Given the error, I suggest you use the volume only to store data; any initialization can be done in the Dockerfile when you create the image. Better yet, create a separate volume for every pod through a StatefulSet, which is the recommended approach.
Update:
The YAML file below will work for you:
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    app: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  serviceName: mongo
  replicas: 3
  template:
    metadata:
      labels:
        app: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: charlesacr.azurecr.io/mongodb:v1
          ports:
            - containerPort: 27017
              name: mongo
          volumeMounts:
            - name: az-files-mongo-storage
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: az-files-mongo-storage
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: az-files-mongo-storage
        resources:
          requests:
            storage: 5Gi
And you need to create the StorageClass before you create the StatefulSet. The YAML file:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: az-files-mongo-storage
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1000
  - gid=1000
parameters:
  skuName: Standard_LRS
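For reference, a standalone claim against this StorageClass would look like the sketch below (the claim name mong-pvc comes from the original Deployment; with the StatefulSet above, volumeClaimTemplates creates such claims automatically, one per pod, so you would not write this by hand):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mong-pvc
spec:
  accessModes:
    - ReadWriteMany          # azure-file volumes support this mode
  storageClassName: az-files-mongo-storage
  resources:
    requests:
      storage: 5Gi           # same size as in the volumeClaimTemplates example
```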
Then the pods run well, as in the screenshot below:
You can configure accessModes: ReadWriteMany, but the volume or storage type still has to support this mode. See the access modes table in the Kubernetes documentation: according to that table, AzureFile supports ReadWriteMany but AzureDisk does not.
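Concretely, with an azure-file-backed class the only change needed in the volumeClaimTemplates section of the StatefulSet above is the access mode (a sketch; the class name matches the StorageClass defined earlier):

```yaml
volumeClaimTemplates:
  - metadata:
      name: az-files-mongo-storage
    spec:
      accessModes:
        - ReadWriteMany      # supported by azure-file, not by azure-disk
      storageClassName: az-files-mongo-storage
      resources:
        requests:
          storage: 5Gi
```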
You should be using StatefulSets for MongoDB; Deployments are for stateless services.