Kubernetes MongoDB pods with NFS persistent volume provisioning
I have a Kubernetes cluster, and I have set up an NFS server as a persistent volume for a MongoDB deployment. I have set the PersistentVolume and PersistentVolumeClaim as below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    name: mynfs
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: <nfs-server-ip>
    path: "/srv/nfs/mydata"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
Everything works fine, but the only problem is that I can't run more than one MongoDB pod, because I get the following error:
{"t":{"$date":"2020-10-15T15:16:39.140+00:00"},"s":"E", "c":"STORAGE", "id":20557, "ctx":"initandlisten","msg":"DBException in initAndListen, terminating","attr":{"error":"DBPathInUse: Unable to lock the lock file: /data/db/mongod.lock (Resource temporarily unavailable). Another mongod instance is already running on the /data/db directory"}}
That pod is always in CrashLoopBackOff: it restarts and returns to the same status again.
I think the problem is that the same volume path referenced in the MongoDB deployment is being accessed by the two pods at the same time, and when one pod already holds the lock, the other pod fails.
Here's the MongoDB deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    name: mongodb
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-password
          volumeMounts:
            - name: data
              mountPath: /data/db
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: task-pv-claim
Can someone please help me fix this? Thank you.
This log entry already tells you what the issue is:
{"t":{"$date":"2020-10-15T15:16:39.140+00:00"},"s":"E", "c":"STORAGE", "id":20557, "ctx":"initandlisten","msg":"DBException in initAndListen, terminating","attr":{"error":"DBPathInUse: Unable to lock the lock file: /data/db/mongod.lock (Resource temporarily unavailable). Another mongod instance is already running on the /data/db directory"}}
All replicas access the same volume and the same data. AFAIK you cannot have multiple instances of MongoDB pointing to the same path; each MongoDB instance needs exclusive access to its own data files. You can run your application as a StatefulSet with a volumeClaimTemplate, which ensures that each replica mounts its own volume. There is a great answer about that.
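A minimal sketch of such a StatefulSet, adapted from the Deployment above (the env section is omitted for brevity; the serviceName and the assumption that a PV exists per replica, or that a dynamic provisioner is available, are mine):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: mongodb          # requires a matching headless Service
  replicas: 2
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: data
              mountPath: /data/db
  volumeClaimTemplates:         # one PVC, and thus one volume, per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: manual   # needs one matching PV per replica,
                                   # or a StorageClass with dynamic provisioning
        resources:
          requests:
            storage: 1Gi
```

Each replica then gets its own claim (data-mongodb-0, data-mongodb-1, and so on) and its own /data/db, so the mongod.lock conflict goes away. Note that the replicas are still independent mongod instances with separate data unless you additionally configure them as a MongoDB replica set.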