
How do you set up a Mongo replica set on Kubernetes?

I'd like to set up a Mongo replica set on Kubernetes with three replicas, which means I'd need to start 3 instances.

Should I start three pods, with Mongo in each one, and use a service to point to the primary? Or should I use a replication controller somehow?

This answer is out of date. I wrote a detailed step-by-step tutorial here using more up-to-date methods. I highly recommend reading it all.

In a nutshell, you run a sidecar app to configure the replica set for you, and either use a service per instance or ping the K8s API for the pod IP addresses.

Example: This will only work in Google Cloud. You will need to make modifications for other platforms, particularly around the volumes:

  1. Follow the example in https://github.com/leportlabs/mongo-k8s-sidecar.git
    • git clone https://github.com/leportlabs/mongo-k8s-sidecar.git
    • cd mongo-k8s-sidecar/example/
    • make add-replica ENV=GoogleCloudPlatform (do this three times)
  2. Connect to the replica set via services.
    • mongodb://mongo-1,mongo-2,mongo-3:27017/dbname_?
  3. You can also use the raw pod IP addresses instead of creating a service per pod
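For illustration, a controller that pairs a mongo container with the sidecar might look roughly like this (a sketch only: the image name cvallance/mongo-k8s-sidecar and the MONGO_SIDECAR_POD_LABELS environment variable follow the sidecar project's README, while the labels and replica set name here are assumptions):

apiVersion: v1
kind: ReplicationController
metadata:
  name: mongo-a
spec:
  replicas: 1
  selector:
    name: mongo-a
  template:
    metadata:
      labels:
        name: mongo-a
        role: mongo
        environment: test
    spec:
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--replSet"
            - rs0
          ports:
            - containerPort: 27017
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            # tells the sidecar which pods belong to the replica set
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo,environment=test"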

Typically, to set up a clustered set of nodes like mongo with replica sets, you would create a Service that tracks the pods under the service name (for example, create a MongoDB replication controller with a tag mongodb, and a Service tracking those instances). The Service can then be queried for its members; using the API server, you can look up the nodes with:

curl -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt https://kubernetes/api/v1/namespaces/default/endpoints/mongodb

where mongodb is the name of your Service.

That returns a JSON object with a bunch of fields, so a good way to parse it is with jq ( https://stedolan.github.io/jq/ ).

Piping the curl command into a jq query like

jq '.subsets[].addresses[]' | jq '{ip: .ip, host: .targetRef.name}'

will return the IPs and hostnames of the mongodb instances in your cluster.
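Put together, the lookup might look like this from inside a pod (a sketch; mongodb is the service name from above, and the two jq stages are collapsed into one filter):

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CA=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# list the IP and pod name of every endpoint behind the mongodb service
curl -s -H "Authorization: Bearer $TOKEN" --cacert $CA \
  https://kubernetes/api/v1/namespaces/default/endpoints/mongodb \
  | jq '.subsets[].addresses[] | {ip: .ip, host: .targetRef.name}'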

So now you know who is in the cluster, and you can create the replica set in your init script. Obviously that means you need to start the Service first, and your startup script needs to wait for all the nodes to be up and registered with the service before it can proceed. If you use one image with one script, it will run on each node, so you need to check that the replica set does not already exist, or handle the error; the first pod to register should do the work. Another option is to run all nodes as single nodes, then run a separate bootstrapping script that creates the replica set.
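A bootstrap script along those lines might look roughly like this (a sketch only: EXPECTED_NODES, the endpoint name, and the single-initiator check are assumptions, and a real script needs more careful error and race handling):

#!/bin/sh
# Sketch: wait until all expected members are registered with the
# service, then initiate the replica set only if it does not exist yet.
EXPECTED_NODES=3
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CA=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
EP=https://kubernetes/api/v1/namespaces/default/endpoints/mongodb

# wait for all nodes to be up and registered with the service
while [ "$(curl -s -H "Authorization: Bearer $TOKEN" --cacert $CA $EP \
          | jq '[.subsets[]?.addresses[]?] | length')" != "$EXPECTED_NODES" ]; do
  sleep 5
done

# rs.status().ok stays 0 until the set has been initiated
if [ "$(mongo --quiet --eval 'rs.status().ok')" != "1" ]; then
  mongo --eval 'rs.initiate()'
  # then rs.add(...) the other members discovered above
fi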

Finally, when you call the mongodb cluster, make sure you specify the URL with the replica set name as an option:

mongodb://mongodb:27017/database?replicaSet=replicaSetName

Since you don't know the IP of the master, you would call it through the service mongodb, which will load-balance requests across the nodes. If you don't specify the replica set name, you will end up with connection errors, as only the primary accepts write requests.
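For instance, from a client pod the connection might look like this (assuming the database name and replicaSetName placeholders from the URL above):

mongo "mongodb://mongodb:27017/database?replicaSet=replicaSetName"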

Obviously this is not a step-by-step tutorial, but I hope it gets you started.

This is the example I'm currently running.

apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongo
  name: mongo-svc1
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    type: mongo-rs-A
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongo
  name: mongo-svc2
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    type: mongo-rs-B
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongo
  name: mongo-svc3
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    type: mongo-rs-C
---

apiVersion: v1
kind: ReplicationController

metadata:
  name: mongo

spec:
  replicas: 1
  selector:
    name: mongo-nodea
    role: mongo
    environment: test

  template:
    metadata:
      labels:
        name: mongo-nodea
        role: mongo
        environment: test
        type: mongo-rs-A
    spec:
      containers:
        - name: mongo-nodea
          image: mongo
          command:
            - mongod
            - "--replSet"
            - rsABC
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
      volumes:
        - name: mongo-persistent-storage
          flocker:
            datasetName: FlockerMongoVolSetA
---
apiVersion: v1
kind: ReplicationController

metadata:
  name: mongo-1

spec:
  replicas: 1
  selector:
    name: mongo-nodeb
    role: mongo
    environment: test

  template:
    metadata:
      labels:
        name: mongo-nodeb
        role: mongo
        environment: test
        type: mongo-rs-B
    spec:
      containers:
        - name: mongo-nodeb
          image: mongo
          command:
            - mongod
            - "--replSet"
            - rsABC
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
      volumes:
        - name: mongo-persistent-storage
          flocker:
            datasetName: FlockerMongoVolSetB
---
apiVersion: v1
kind: ReplicationController

metadata:
  name: mongo-2

spec:
  replicas: 1
  selector:
    name: mongo-nodec
    role: mongo
    environment: test

  template:
    metadata:
      labels:
        name: mongo-nodec
        role: mongo
        environment: test
        type: mongo-rs-C
    spec:
      containers:
        - name: mongo-nodec
          image: mongo
          command:
            - mongod
            - "--replSet"
            - rsABC
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
      volumes:
        - name: mongo-persistent-storage
          flocker:
            datasetName: FlockerMongoVolSetC


kubectl --kubeconfig=clusters/k8s-mongo/kubeconfig get po,svc -L type,role,name
NAME            READY     STATUS    RESTARTS   AGE       TYPE         ROLE      NAME
mongo-1-39nuw   1/1       Running   0          1m        mongo-rs-B   mongo     mongo-nodeb
mongo-2-4tgho   1/1       Running   0          1m        mongo-rs-C   mongo     mongo-nodec
mongo-rk9n8     1/1       Running   0          1m        mongo-rs-A   mongo     mongo-nodea
NAME         CLUSTER_IP   EXTERNAL_IP   PORT(S)     SELECTOR          AGE       TYPE      ROLE      NAME
kubernetes   10.3.0.1     <none>        443/TCP     <none>            21h       <none>    <none>    <none>
mongo-svc1   10.3.0.28    <none>        27017/TCP   type=mongo-rs-A   1m        <none>    <none>    mongo
mongo-svc2   10.3.0.56    <none>        27017/TCP   type=mongo-rs-B   1m        <none>    <none>    mongo
mongo-svc3   10.3.0.47    <none>        27017/TCP   type=mongo-rs-C   1m        <none>    <none>    mongo

On the primary node, I go into the mongo shell and run:

rs.status()
rs.initiate()
rs.add("10.3.0.56:27017")

I'm currently running into an issue where the two nodes are stuck in SECONDARY and STARTUP states, with no primary.

rs.status()
{
    "set" : "rsABC",
    "date" : ISODate("2016-01-21T22:51:33.216Z"),
    "myState" : 2,
    "term" : NumberLong(1),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "members" : [
        {
            "_id" : 0,
            "name" : "mongo-rk9n8:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 242,
            "optime" : {
                "ts" : Timestamp(1453416638, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2016-01-21T22:50:38Z"),
            "infoMessage" : "could not find member to sync from",
            "configVersion" : 2,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "10.3.0.56:27017",
            "health" : 1,
            "state" : 0,
            "stateStr" : "STARTUP",
            "uptime" : 45,
            "optime" : {
                "ts" : Timestamp(0, 0),
                "t" : NumberLong(-1)
            },
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2016-01-21T22:51:28.639Z"),
            "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
            "pingMs" : NumberLong(40),
            "configVersion" : -2
        }
    ],
    "ok" : 1
}

Have a look at the link below. In Kubernetes, create the service addresses first, then the controllers, and the replica set initiation can be generated easily: https://www.mongodb.com/blog/post/running-mongodb-as-a-microservice-with-docker-and-kubernetes

@Stephen Nguyen

I just copied your case, created a namespace test for it (changing your yaml file accordingly), and initialized my mongo replica set with:

rs.initiate({
     "_id" : "rsABC",
     "members" : [
          {
               "_id" : 0,
               "host" : "mongo-svc1.test:27017",
               "priority" : 10
          },
          {
               "_id" : 1,
               "host" : "mongo-svc2.test:27017",
               "priority" : 9
          },
          {
               "_id" : 2,
               "host" : "mongo-svc3.test:27017",
                "arbiterOnly" : true
          }
     ]
})

It seems to work:

> rs.status()
{
        "set" : "rsABC",
        "date" : ISODate("2016-05-10T07:45:25.975Z"),
        "myState" : 2,
        "term" : NumberLong(2),
        "syncingTo" : "mongo-svc1.test:27017",
        "heartbeatIntervalMillis" : NumberLong(2000),
        "members" : [
                {
                        "_id" : 0,
                        "name" : "mongo-svc1.test:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 657,
                        "optime" : {
                                "ts" : Timestamp(1462865715, 2),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2016-05-10T07:35:15Z"),
                        "lastHeartbeat" : ISODate("2016-05-10T07:45:25.551Z"),
                        "lastHeartbeatRecv" : ISODate("2016-05-10T07:45:25.388Z"),
                        "pingMs" : NumberLong(0),
                        "electionTime" : Timestamp(1462865715, 1),
                        "electionDate" : ISODate("2016-05-10T07:35:15Z"),
                        "configVersion" : 1
                },
                {
                        "_id" : 1,
                        "name" : "mongo-svc2.test:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 1171,
                        "optime" : {
                                "ts" : Timestamp(1462865715, 2),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2016-05-10T07:35:15Z"),
                        "syncingTo" : "mongo-svc1.test:27017",
                        "configVersion" : 1,
                        "self" : true
                },
                {
                        "_id" : 2,
                        "name" : "mongo-svc3.test:27017",
                        "health" : 1,
                        "state" : 7,
                        "stateStr" : "ARBITER",
                        "uptime" : 657,
                        "lastHeartbeat" : ISODate("2016-05-10T07:45:25.549Z"),
                        "lastHeartbeatRecv" : ISODate("2016-05-10T07:45:23.969Z"),
                        "pingMs" : NumberLong(0),
                        "configVersion" : 1
                }
        ],
        "ok" : 1
}

I added the mongo nodes by service name.

Just as a heads-up: don't use the mongo-k8s-sidecar approach in production, as it has potentially dangerous consequences. For a more up-to-date approach using MongoDB with k8s StatefulSets, see:

  1. Deploying a MongoDB Replica Set as a Kubernetes StatefulSet
  2. Configuring Some Key Production Settings for MongoDB on Kubernetes
  3. Using the Enterprise Version of MongoDB on Kubernetes
  4. Deploying a MongoDB Sharded Cluster using Kubernetes StatefulSets

More information about MongoDB & Kubernetes is available at: http://k8smongodb.net/
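To give a flavour of what those tutorials build, a stripped-down StatefulSet plus headless service might look roughly like this (a sketch only; the names, replica set name MainRepSet, and storage size are assumptions, not taken verbatim from the tutorials):

apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  clusterIP: None            # headless: each pod gets a stable DNS name
  selector:
    role: mongodb
  ports:
    - port: 27017
      targetPort: 27017
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongod
spec:
  serviceName: mongodb-service
  replicas: 3
  selector:
    matchLabels:
      role: mongodb
  template:
    metadata:
      labels:
        role: mongodb
    spec:
      containers:
        - name: mongod-container
          image: mongo
          command:
            - mongod
            - "--replSet"
            - MainRepSet
            - "--bind_ip_all"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongodb-persistent-storage
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: mongodb-persistent-storage
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi

Each pod then gets a stable DNS name (mongod-0.mongodb-service, mongod-1.mongodb-service, and so on) that survives rescheduling, which is what makes it safe to drive rs.initiate() from hostnames instead of pod IPs.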

I'm using this as a solution. It's NOT production-ready yet.

Set up MongoDB Replication

Get all the MongoDB pod IPs:

kubectl describe pod <PODNAME> | grep IP | sed -E 's/IP:[[:space:]]+//'
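Alternatively, a single jsonpath query can print every pod IP at once (a sketch; the role=mongo label is an assumption about how the pods are labelled):

kubectl get pods -l role=mongo -o jsonpath='{.items[*].status.podIP}'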

and...

Run kubectl exec -i <POD_1_NAME> mongo

and ...

rs.initiate({ 
     "_id" : "cloudboost", 
     "version":1,
     "members" : [ 
          {
               "_id" : 0,
               "host" : "<POD_1_IP>:27017",
               "priority" : 10
          },
          {
               "_id" : 1,
               "host" : "<POD_2_IP>:27017",
               "priority" : 9
          },
          {
               "_id" : 2,
               "host" : "<POD_3_IP>:27017",
               "arbiterOnly" : true
          }
     ]
});

For example:

rs.initiate({  
     "_id" : "cloudboost",
     "version":1,
     "members" : [ 
          {
               "_id" : 0,
               "host" : "10.244.1.5:27017",
               "priority" : 10
          },
          {
               "_id" : 1,
               "host" : "10.244.2.6:27017",
               "priority" : 9
          },
          {
               "_id" : 2,
               "host" : "10.244.3.5:27017",
               "arbiterOnly" : true
          }
     ]
}); 

Please note: the IPs will be different in your cluster.

TODO: create a headless service to discover the nodes automatically and initialize a replica set.
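For reference, the headless service in that TODO might look roughly like this (a sketch; the role=mongo label is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  clusterIP: None   # headless: DNS for "mongo" resolves to the pod IPs directly
  selector:
    role: mongo
  ports:
    - port: 27017

With clusterIP: None, a DNS lookup of mongo from inside the namespace returns the individual pod IPs, so the rs.initiate() call above could be driven from DNS results instead of hard-coded addresses.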
