
Kubernetes / Rancher 2, mongo-replicaset with Local Storage Volume deployment

I keep trying, but Rancher 2.1 fails to deploy the "mongo-replicaset" Catalog App with Local Persistent Volumes configured.

How do I correctly deploy a mongo-replicaset with a Local Storage Volume? Any debugging techniques are appreciated, since I am new to Rancher 2.

I follow the four steps A-D below, but the first pod deployment never finishes. What's wrong with it? Logs and result screenshots are at the end. The detailed configuration can be found here.

Note: Deployment without Local Persistent Volumes succeeds.

Note: Deployment with a Local Persistent Volume and the plain "mongo" image (no replica set) succeeds.

Note: Deployment with both mongo-replicaset and a Local Persistent Volume fails.


Step A - Cluster

Create a rancher instance, and:

  1. Add three nodes: one plain worker, one worker + etcd, and one worker + control plane
  2. Add a label to each node (name=one, name=two, name=three) to use later for node affinity; see the sketch just after this list
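
For reference, this is roughly what the label looks like on each node object once applied (a sketch; it assumes the label key is literally "name", which is what the persistent volume node affinity below selects on, and the node name is hypothetical):

  # Fragment of a labeled node object
  apiVersion: v1
  kind: Node
  metadata:
    name: worker-one        # hypothetical node name
    labels:
      name: one             # "two" and "three" on the other two nodes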

Step B - Storage class

Create a storage class with these parameters (an equivalent manifest is sketched after the list):

  1. volumeBindingMode: WaitForFirstConsumer, as seen here
  2. name : local-storage
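
The storage class created in the Rancher UI should be roughly equivalent to this manifest (a sketch; local volumes have no dynamic provisioner, so the "no-provisioner" provisioner is used and volume binding is delayed until a pod is scheduled):

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: local-storage
  provisioner: kubernetes.io/no-provisioner
  volumeBindingMode: WaitForFirstConsumer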

Step C - Persistent Volumes

Add 3 persistent volumes configured as follows (an example manifest follows the list):

  1. type : local node path
  2. Access Mode: Single Node RW, 12Gi
  3. storage class: local-storage
  4. Node Affinity: name one (two for second volume, three for third volume)
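
Each of the three persistent volumes should look roughly like this manifest (a sketch; the volume name and the host path /mongo are assumptions, the path chosen to match the kubelet bind mount shown in the answer below):

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: mongo-pv-one            # hypothetical name
  spec:
    capacity:
      storage: 12Gi
    accessModes:
      - ReadWriteOnce             # "Single Node RW"
    persistentVolumeReclaimPolicy: Retain
    storageClassName: local-storage
    local:
      path: /mongo                # must exist on the node
    nodeAffinity:
      required:
        nodeSelectorTerms:
          - matchExpressions:
              - key: name
                operator: In
                values:
                  - one           # "two" / "three" for the other volumes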

Step D - Mongo-replicaset Deployment

From the catalog, select Mongo-replicaset and configure it as follows (equivalent Helm values are sketched after the list):

  1. replicaSetName: rs0
  2. persistentVolume.enabled: true
  3. persistentVolume.size: 12Gi
  4. persistentVolume.storageClass: local-storage
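
Configuring the catalog app this way should be roughly equivalent to passing these values to the chart (a sketch of the corresponding Helm values):

  replicaSetName: rs0
  persistentVolume:
    enabled: true
    size: 12Gi
    storageClass: local-storage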

Result

After completing steps A-D, the newly created mongo-replicaset app stays in the "Initializing" state indefinitely.

[Screenshot: the mongo app is stuck in the "Initializing" state]

The associated mongo workload contains only one pod instead of three, and that pod has two crashed containers: bootstrap and mongo-replicaset.

[Screenshot: only one pod; the workload has crashed]


Logs

This is the output from the 4 containers of the only running pod. There is no error and nothing that looks like a problem.

[Screenshots: no logs in the mongo container, almost no logs in the copy-config container, almost no logs in the install container, and some logs from the bootstrap container]

I can't figure out what's wrong with this configuration, and I don't have any tools or techniques to analyze the problem. The detailed configuration can be found here. Please ask me for more command output if needed.

Thank you

All this configuration is correct.

It's missing one detail: since Rancher is a containerized deployment of Kubernetes, the kubelets are deployed on each node in Docker containers, so they do not have access to the OS-level local folders.

You need to add a volume binding for the kubelets; that way Kubernetes will be able to create the mongo pods with this same binding.

In Rancher: edit the cluster YAML (Cluster > Edit > Edit as YAML)

Add the following entry under "services" node:

  kubelet: 
    extra_binds: 
      - "/mongo:/mongo:rshared"
