Hi Techrunnr, this document describes dockerising MongoDB with a replication setup and HA.

  • HA (High Availability) refers to improving system and application availability by minimising downtime, both for planned maintenance and for unplanned system or application crashes. This post specifically covers a MongoDB replication setup and HA with Docker, using a master-slave architecture.
  • Master: The master node can both read and write data. When data is modified, the oplog synchronises the updates to all connected slave nodes.
  • Slave: The slave node can only read data, not write it. It automatically synchronises data from the master node.
  • For a MongoDB replication setup with 3 nodes, please go through this link: https://www.techrunnr.com/how-to-configure-mongodb-replication/
  • Prerequisites: 3 nodes, with Docker running on all 3
  • Here our 3 nodes are ubuntuserver1, ubuntuserver2 and sharding1. Initially ubuntuserver1 acts as the primary, while ubuntuserver2 and sharding1 act as secondaries
  • Start the docker service on all 3 nodes
    systemctl start docker 
  • Initialise the Docker swarm on ubuntuserver1 and join the other 2 nodes to the swarm; here we promote all 3 nodes to swarm managers
    docker swarm init
  • The command below generates a manager join token for adding the other nodes
    docker swarm join-token manager

    It will generate a token; execute the printed join command on the other 2 nodes so that they can participate in the cluster.
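  • The join step above can be sketched as a small helper that prints the command to run on each remaining node. The node names are the ones used in this post; the token is a placeholder for the one printed by `docker swarm join-token manager`, and the manager address is assumed to be ubuntuserver1 on the default swarm port.

```shell
# Print the swarm join command for each remaining node.
# The commands are echoed rather than executed -- copy them to the nodes
# (or pipe through ssh) once you substitute the real token.
join_commands() {
  local token="$1" manager="$2"
  for node in ubuntuserver2 sharding1; do
    echo "ssh $node docker swarm join --token $token $manager:2377"
  done
}

join_commands "SWMTKN-<your-token>" "ubuntuserver1"
```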

  • Now we need to create a docker overlay network, here our overlay network name is techrunnr
    docker network create --driver overlay techrunnr

  • Usually, to initialise replication we run rs.initiate() and rs.add() by hand, but here we put these commands in a file called mongorc.js and copy it into the image's /etc/ directory; the mongo shell evaluates /etc/mongorc.js on startup.
  • We also need to enable replication in the mongod.conf file, so we edit mongod.conf.orig, create the mongorc.js file, and copy both into our Docker image.
  • mongod.conf.orig
    # mongod.conf
    
    # for documentation of all options, see:
    #   http://docs.mongodb.org/manual/reference/configuration-options/
    
    # Where and how to store data.
    storage:
      dbPath: /var/lib/mongodb
      journal:
        enabled: true
    #  engine:
    #  mmapv1:
    #  wiredTiger:
    
    # where to write logging data.
    systemLog:
      destination: file
      logAppend: true
      path: /var/log/mongodb/mongod.log
    
    # network interfaces
    net:
      port: 27017
      bindIp: 0.0.0.0
    
    
    # how the process runs
    processManagement:
      timeZoneInfo: /usr/share/zoneinfo
    
    #security:
    
    #operationProfiling:
    
    replication:
      replSetName: "techrunnr"
    #sharding:
    
    ## Enterprise-Only Options:
    
    #auditLog:
    
    #snmp:
  • In this replica setup we create 3 Docker services named techrunnr-1, techrunnr-2 and techrunnr-3, so we reference these service names in the mongorc.js file
    rs.initiate( {
       _id : "techrunnr",
       members: [
          { _id: 0, host: "techrunnr-1" },
          { _id: 1, host: "techrunnr-2" },
          { _id: 2, host: "techrunnr-3" }
       ]
    });
    
    rs.slaveOk();
    
    cfg = rs.conf()
    cfg.members[0].priority = 1
    cfg.members[1].priority = 1
    cfg.members[2].priority = 1
    
    rs.reconfig(cfg)
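  • As a hedged alternative sketch: the same initiation could be done as a one-off with `mongo --eval` instead of baking mongorc.js into the image. The member hosts are the service names from this post; `<container_ID>` is a placeholder, and the command is echoed rather than executed.

```shell
# Build (and print) the one-off replica-set initiation command.
# Remove the echo wrapper and substitute a real container ID to run it.
rs_init_cmd() {
  local cid="$1"
  echo "docker exec -i $cid mongo --quiet --eval 'rs.initiate({_id:\"techrunnr\",members:[{_id:0,host:\"techrunnr-1\"},{_id:1,host:\"techrunnr-2\"},{_id:2,host:\"techrunnr-3\"}]})'"
}

rs_init_cmd "<container_ID>"
```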
  • Now we will build our Docker image; please go through the Dockerfile below
    FROM mongo
    MAINTAINER Techrunnr <admin@techrunnr.com>
    RUN rm /etc/mongod.conf.orig
    COPY mongod.conf.orig /etc/
    COPY mongorc.js /etc/
    RUN chmod 700 /etc/mongorc.js
    RUN chmod 700 /etc/mongod.conf.orig
    EXPOSE 27017
  • Build the docker image using below command
    docker build -t <image_name> <dockerfile path>

  • you can pull this image from our docker repo
    docker pull techrunnr/database:techrunnr-mongodb

    Make sure this image is available on all 3 nodes; otherwise the Docker service cannot be created successfully and will throw a task-failure error.
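  • One way to make the image available everywhere is to build once, push to a registry, and pull on every node. The image name is the one from this post; the commands are printed by a helper so you can review them before running.

```shell
# Print the build/push/pull sequence for distributing the image.
# 'docker pull' must then be run on each of the 3 nodes.
publish_commands() {
  local image="$1"
  echo "docker build -t $image ."
  echo "docker push $image"
  echo "docker pull $image"
}

publish_commands "techrunnr/database:techrunnr-mongodb"
```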

  • Now we will create 3 Docker services connected to our techrunnr overlay network
    docker service create --name techrunnr-1 --network techrunnr techrunnr/database:techrunnr-mongodb mongod --replSet "techrunnr"
    
    
    docker service create --name techrunnr-2 --network techrunnr techrunnr/database:techrunnr-mongodb mongod --replSet "techrunnr"
    
    
    docker service create --name techrunnr-3 --network techrunnr techrunnr/database:techrunnr-mongodb mongod --replSet "techrunnr"
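  • Since the three service-create commands above differ only in the service name, they can be generated in a loop. This is a minimal sketch assuming the techrunnr overlay network and the image already exist; the commands are echoed so you can inspect them before running.

```shell
# Print the three replica-set service-create commands.
# Drop the echo (or pipe to sh) to actually create the services.
create_service_commands() {
  local image="$1"
  for i in 1 2 3; do
    echo "docker service create --name techrunnr-$i --network techrunnr $image mongod --replSet techrunnr"
  done
}

create_service_commands "techrunnr/database:techrunnr-mongodb"
```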
  • Now connect to the container on each node
    docker ps
    
    
    docker exec -it <container_ID> mongo

  • In the mongo shell, execute rs.status() to see the replication status
    techrunnr:PRIMARY> rs.status()
    {
    	"set" : "techrunnr",
    	"date" : ISODate("2019-06-18T13:00:00.268Z"),
    	"myState" : 1,
    	"term" : NumberLong(1),
    	"syncingTo" : "",
    	"syncSourceHost" : "",
    	"syncSourceId" : -1,
    	"heartbeatIntervalMillis" : NumberLong(2000),
    	"optimes" : {
    		"lastCommittedOpTime" : {
    			"ts" : Timestamp(1560862791, 4),
    			"t" : NumberLong(1)
    		},
    		"readConcernMajorityOpTime" : {
    			"ts" : Timestamp(1560862791, 4),
    			"t" : NumberLong(1)
    		},
    		"appliedOpTime" : {
    			"ts" : Timestamp(1560862791, 4),
    			"t" : NumberLong(1)
    		},
    		"durableOpTime" : {
    			"ts" : Timestamp(1560862791, 4),
    			"t" : NumberLong(1)
    		}
    	},
    	"lastStableCheckpointTimestamp" : Timestamp(1560862750, 1),
    	"members" : [
    		{
    			"_id" : 0,
    			"name" : "techrunnr-1:27017",
    			"health" : 1,
    			"state" : 2,
    			"stateStr" : "SECONDARY",
    			"uptime" : 122,
    			"optime" : {
    				"ts" : Timestamp(1560862791, 4),
    				"t" : NumberLong(1)
    			},
    			"optimeDurable" : {
    				"ts" : Timestamp(1560862791, 4),
    				"t" : NumberLong(1)
    			},
    			"optimeDate" : ISODate("2019-06-18T12:59:51Z"),
    			"optimeDurableDate" : ISODate("2019-06-18T12:59:51Z"),
    			"lastHeartbeat" : ISODate("2019-06-18T12:59:58.971Z"),
    			"lastHeartbeatRecv" : ISODate("2019-06-18T12:59:59.981Z"),
    			"pingMs" : NumberLong(1),
    			"lastHeartbeatMessage" : "",
    			"syncingTo" : "techrunnr-2:27017",
    			"syncSourceHost" : "techrunnr-2:27017",
    			"syncSourceId" : 1,
    			"infoMessage" : "",
    			"configVersion" : 1
    		},
    		{
    			"_id" : 1,
    			"name" : "techrunnr-2:27017",
    			"health" : 1,
    			"state" : 1,
    			"stateStr" : "PRIMARY",
    			"uptime" : 309,
    			"optime" : {
    				"ts" : Timestamp(1560862791, 4),
    				"t" : NumberLong(1)
    			},
    			"optimeDate" : ISODate("2019-06-18T12:59:51Z"),
    			"syncingTo" : "",
    			"syncSourceHost" : "",
    			"syncSourceId" : -1,
    			"infoMessage" : "could not find member to sync from",
    			"electionTime" : Timestamp(1560862688, 1),
    			"electionDate" : ISODate("2019-06-18T12:58:08Z"),
    			"configVersion" : 1,
    			"self" : true,
    			"lastHeartbeatMessage" : ""
    		},
    		{
    			"_id" : 2,
    			"name" : "techrunnr-3:27017",
    			"health" : 1,
    			"state" : 2,
    			"stateStr" : "SECONDARY",
    			"uptime" : 122,
    			"optime" : {
    				"ts" : Timestamp(1560862791, 4),
    				"t" : NumberLong(1)
    			},
    			"optimeDurable" : {
    				"ts" : Timestamp(1560862791, 4),
    				"t" : NumberLong(1)
    			},
    			"optimeDate" : ISODate("2019-06-18T12:59:51Z"),
    			"optimeDurableDate" : ISODate("2019-06-18T12:59:51Z"),
    			"lastHeartbeat" : ISODate("2019-06-18T12:59:58.969Z"),
    			"lastHeartbeatRecv" : ISODate("2019-06-18T12:59:59.981Z"),
    			"pingMs" : NumberLong(1),
    			"lastHeartbeatMessage" : "",
    			"syncingTo" : "techrunnr-2:27017",
    			"syncSourceHost" : "techrunnr-2:27017",
    			"syncSourceId" : 1,
    			"infoMessage" : "",
    			"configVersion" : 1
    		}
    	],
    	"ok" : 1,
    	"operationTime" : Timestamp(1560862791, 4),
    	"$clusterTime" : {
    		"clusterTime" : Timestamp(1560862791, 4),
    		"signature" : {
    			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
    			"keyId" : NumberLong(0)
    		}
    	}
    }

     

  • Now we can see the replica set is working
  • Now we will insert some data into the primary node, then we can check that it replicates to the other 2 nodes
    use techrunnr
    db.techrunnr.insert( { item: "card", qty: 15  } )
  • Now go to the other 2 nodes and execute the show dbs and show collections commands
    show dbs
    show collections
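  • To read the inserted document back from a secondary, rs.slaveOk() must be set first, because secondaries reject reads by default. A hedged sketch, with `<container_ID>` as a placeholder and the command echoed rather than executed:

```shell
# Print a command that reads the techrunnr collection from a secondary.
# rs.slaveOk() allows the read on a SECONDARY member.
read_check_cmd() {
  local cid="$1"
  echo "docker exec -i $cid mongo --quiet --eval 'rs.slaveOk(); db.getSiblingDB(\"techrunnr\").techrunnr.find().toArray()'"
}

read_check_cmd "<container_ID>"
```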

  • Now remove the techrunnr-2 service, which is acting as the primary DB
    docker service rm techrunnr-2
  • Then we can see that one of the other 2 nodes becomes the primary node
  • Now insert some data on the newly promoted primary node, then recreate the Docker service that was removed and connect to its container; we can see that the former primary has become a secondary and holds the latest data replicated from the new primary
  • To simulate a sudden system crash, shut down the node running the primary; the swarm will reschedule the service task on one of the other 2 nodes so it can rejoin the replication
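  • The failover drill above can be sketched as a short command sequence. The service and image names are the ones from this post, the 30-second pause is an assumed settling time for the election, and the commands are echoed so nothing is removed by accident.

```shell
# Print the failover-drill steps: remove the primary's service, wait for a
# new election, then recreate the service so it rejoins as a secondary.
failover_drill() {
  local image="$1"
  echo "docker service rm techrunnr-2"
  echo "sleep 30   # give the remaining members time to elect a new primary"
  echo "docker service create --name techrunnr-2 --network techrunnr $image mongod --replSet techrunnr"
}

failover_drill "techrunnr/database:techrunnr-mongodb"
```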

 

© 2019, Techrunnr. All rights reserved.
