Running mongo with persistent volume throws error - Kubernetes
I want to create a mongodb stateful deployment that shares my host's local directory /mnt/nfs/data/myproject/production/permastore/mogno
(a network file system directory) with all mongodb pods at /data/db. I am running my kubernetes cluster on three VirtualMachines.
When I don't use a persistent volume claim, I can start mongo without any problem! However, when I start mongodb with a persistent volume claim, I get this error.
Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
Does anyone know why mongo fails to start when /data/db is mounted with a persistent volume? How can I fix it?
The following config files won't work in your environment as-is, because the paths differ. However, you should be able to take inspiration from my setup.
Persistent Volume - pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: phenex-mongo
  labels:
    type: local
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /mnt/nfs/data/phenex/production/permastore/mongo
  claimRef:
    name: phenex-mongo
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  volumeMode: Filesystem
Persistent Volume Claim - pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: phenex-mongo
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
Deployment - deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo
  labels:
    run: mongo
spec:
  selector:
    matchLabels:
      run: mongo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        run: mongo
    spec:
      containers:
      - image: mongo:4.2.0-bionic
        name: mongo
        ports:
        - containerPort: 27017
          name: mongo
        volumeMounts:
        - name: phenex-mongo
          mountPath: /data/db
      volumes:
      - name: phenex-mongo
        persistentVolumeClaim:
          claimName: phenex-mongo
Applying the configuration
$ kubectl apply -f pv.yaml
$ kubectl apply -f pvc.yaml
$ kubectl apply -f deployment.yaml
Checking cluster state
$ kubectl get deploy,po,pv,pvc --output=wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.extensions/mongo 1/1 1 1 38m mongo mongo:4.2.0-bionic run=mongo
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/mongo-59f669657d-fpkgv 1/1 Running 0 35m 10.44.0.2 web01 <none> <none>
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
persistentvolume/phenex-mongo 1Gi RWO Retain Bound phenex/phenex-mongo manual 124m Filesystem
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/phenex-mongo Bound phenex-mongo 1Gi RWO manual 122m Filesystem
Running the mongo shell in the pod
$ kubectl exec -it mongo-59f669657d-fpkgv mongo
MongoDB shell version v4.2.0
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2019-08-14T14:25:25.452+0000 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:341:17
@(connect):2:6
2019-08-14T14:25:25.453+0000 F - [main] exception: connect failed
2019-08-14T14:25:25.453+0000 E - [main] exiting with code 1
command terminated with exit code 1
Logs
$ kubectl logs mongo-59f669657d-fpkgv
2019-08-14T14:00:32.287+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=mongo-59f669657d-fpkgv
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] db version v4.2.0
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] git version: a4b751dcf51dd249c5865812b390cfd1c0129c30
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1 11 Sep 2018
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] allocator: tcmalloc
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] modules: none
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] build environment:
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] distmod: ubuntu1804
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] distarch: x86_64
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] target_arch: x86_64
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] options: { net: { bindIp: "*" } }
root@mongo-59f669657d-fpkgv:/# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
mongodb 1 0.0 2.7 208324 27920 ? Dsl 14:00 0:00 mongod --bind_ip_all
root 67 0.0 0.2 18496 2060 pts/1 Ss 15:12 0:00 bash
root 81 0.0 0.1 34388 1536 pts/1 R+ 15:13 0:00 ps aux
I found the cause and the solution! In my setup, I use NFS to share a directory over the network. That way, all my cluster nodes (minions) can access the common directory located at /mnt/nfs/data/.

The reason mongo could not start is that the persistent volume was invalid. Namely, I was using the HostPath persistent volume type - which is fine for single-node testing, or if you manually create the same directory structure on all cluster nodes, e.g. /tmp/your_pod_data_dir/. However, if you try to mount an NFS directory as a hostPath, it causes problems - like this one!

For directories shared via network file system, use the NFS persistent volume type (NFS example)! Below you will find my setup and two solutions.
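As a quick illustration of the hostPath workaround mentioned above: the backing directory must exist on every node a pod can be scheduled on. The loop below only simulates that per-node mkdir locally under /tmp (node names taken from my /etc/hosts below); on a real cluster you would run the mkdir over ssh on each node instead.

```shell
# Hedged sketch: hostPath volumes require the backing directory to exist
# on every node. Simulate the per-node mkdir locally under /tmp; on a
# real cluster you would ssh to web01/compute01/compute02 and run the
# mkdir there.
for node in web01 compute01 compute02; do
  mkdir -p "/tmp/demo-hostpath/$node/your_pod_data_dir"
done
ls /tmp/demo-hostpath
```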
/etc/hosts - my cluster nodes.
# Cluster nodes
192.168.123.130 master
192.168.123.131 web01
192.168.123.132 compute01
192.168.123.133 compute02
List of exported NFS directories.
[vagrant@master]$ showmount -e
Export list for master:
/nfs/data compute*,web*
/nfs/www compute*,web*
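For reference, the export list above would correspond to an /etc/exports on the master roughly like the following. This is only a sketch - the mount options shown are common defaults and an assumption on my part, not taken from the actual setup.

```
# Hypothetical /etc/exports matching the showmount output above;
# the rw/sync/no_subtree_check options are illustrative assumptions.
/nfs/data compute*(rw,sync,no_subtree_check) web*(rw,sync,no_subtree_check)
/nfs/www  compute*(rw,sync,no_subtree_check) web*(rw,sync,no_subtree_check)
```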
This solution shows a deployment that mounts the NFS directory directly via a volume - have a look at the volumes and volumeMounts sections.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo
  labels:
    run: mongo
spec:
  selector:
    matchLabels:
      run: mongo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        run: mongo
    spec:
      containers:
      - image: mongo:4.2.0-bionic
        name: mongo
        ports:
        - containerPort: 27017
          name: mongo
        volumeMounts:
        - name: phenex-nfs
          mountPath: /data/db
      volumes:
      - name: phenex-nfs
        nfs:
          # IP of master node
          server: 192.168.123.130
          path: /nfs/data/phenex/production/permastore/mongo
This solution shows a deployment that mounts the NFS directory via a volume claim - have a look at persistentVolumeClaim; the Persistent Volume and Persistent Volume Claim are defined below.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo
  labels:
    run: mongo
spec:
  selector:
    matchLabels:
      run: mongo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        run: mongo
    spec:
      containers:
      - image: mongo:4.2.0-bionic
        name: mongo
        ports:
        - containerPort: 27017
          name: mongo
        volumeMounts:
        - name: phenex-nfs
          mountPath: /data/db
      volumes:
      - name: phenex-nfs
        persistentVolumeClaim:
          claimName: phenex-nfs
Persistent Volume - NFS
apiVersion: v1
kind: PersistentVolume
metadata:
  name: phenex-nfs
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  nfs:
    # IP of master node
    server: 192.168.123.130
    path: /nfs/data
  claimRef:
    name: phenex-nfs
  persistentVolumeReclaimPolicy: Retain
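One detail worth noting: claimRef above pins the PV to a claim by name only. It can also carry an explicit namespace so that a claim with the same name in another namespace cannot bind this volume. The stanza below is only a sketch - the namespace value is illustrative (the kubectl output further down suggests this claimRef was actually recorded without one).

```yaml
# Sketch of a fully-qualified claimRef; the namespace shown here is an
# illustrative assumption, not taken from the original setup.
claimRef:
  name: phenex-nfs
  namespace: default
```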
Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: phenex-nfs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
# Checking cluster state
[vagrant@master ~]$ kubectl get deploy,po,pv,pvc --output=wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.extensions/mongo 1/1 1 1 18s mongo mongo:4.2.0-bionic run=mongo
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/mongo-65b7d6fb9f-mcmvj 1/1 Running 0 18s 10.44.0.2 web01 <none> <none>
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
persistentvolume/phenex-nfs 1Gi RWO Retain Bound /phenex-nfs 27s Filesystem
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/phenex-nfs Bound phenex-nfs 1Gi RWO 27s Filesystem
# Attaching to pod and checking network bindings
[vagrant@master ~]$ kubectl exec -it mongo-65b7d6fb9f-mcmvj -- bash
root@mongo-65b7d6fb9f-mcmvj:/$ apt update
root@mongo-65b7d6fb9f-mcmvj:/$ apt install net-tools
root@mongo-65b7d6fb9f-mcmvj:/$ netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN -
# Running mongo client
root@mongo-65b7d6fb9f-mcmvj:/$ mongo
MongoDB shell version v4.2.0
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("45287a0e-7d41-4484-a267-5101bd20fad3") }
MongoDB server version: 4.2.0
Server has startup warnings:
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten]
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten]
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten]
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten]
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten]
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
>