MongoDB 4.2.0 Sharded Cluster + Replica Set Setup

We need to build a 16-host MongoDB cluster on the platform. The key points:

  • Following the recommendation for current MongoDB releases, the configuration files are written in YAML
  • 16 servers in total, i.e. 16 nodes. The data set is partitioned into 8 shards, with 3 config server nodes and 5 router nodes

Environment Preparation

Server plan:

CentOS Linux release 7.6.1810
MongoDB 4.2.0
mongodb-linux-x86_64-4.2.0.tgz binary package
shard hosts:
shard1:
cache05:27001 (primary)
cache06:27001 (secondary)
cache08:27001 (arbiter)
shard2:
cache07:27002 (primary)
cache08:27002 (secondary)
cache06:27002 (arbiter)
shard3:
cache09:27003 (primary)
cache10:27003 (secondary)
cache12:27003 (arbiter)
shard4:
cache11:27004 (primary)
cache12:27004 (secondary)
cache10:27004 (arbiter)
shard5:
cache13:27005 (primary)
cache14:27005 (secondary)
cache16:27005 (arbiter)
shard6:
cache15:27006 (primary)
cache16:27006 (secondary)
cache14:27006 (arbiter)
shard7:
cache17:27007 (primary)
cache18:27007 (secondary)
cache20:27007 (arbiter)
shard8:
cache19:27008 (primary)
cache20:27008 (secondary)
cache18:27008 (arbiter)
configsvr hosts:
cache06:28000
cache08:28000
cache10:28000

Route (mongos) hosts:
cache12:29000
cache14:29000
cache16:29000
cache18:29000
cache20:29000

Create User

Create the user mongod, which will own and run the MongoDB installation.

# Run these commands from any host to create the user on every node
ssh root@cache05 "useradd -d /mongod mongod && passwd mongod"
ssh root@cache06 "useradd -d /mongod mongod && passwd mongod"
ssh root@cache07 "useradd -d /mongod mongod && passwd mongod"
ssh root@cache08 "useradd -d /mongod mongod && passwd mongod"
ssh root@cache09 "useradd -d /mongod mongod && passwd mongod"
ssh root@cache10 "useradd -d /mongod mongod && passwd mongod"
ssh root@cache11 "useradd -d /mongod mongod && passwd mongod"
ssh root@cache12 "useradd -d /mongod mongod && passwd mongod"
ssh root@cache13 "useradd -d /mongod mongod && passwd mongod"
ssh root@cache14 "useradd -d /mongod mongod && passwd mongod"
ssh root@cache15 "useradd -d /mongod mongod && passwd mongod"
ssh root@cache16 "useradd -d /mongod mongod && passwd mongod"
ssh root@cache17 "useradd -d /mongod mongod && passwd mongod"
ssh root@cache18 "useradd -d /mongod mongod && passwd mongod"
ssh root@cache19 "useradd -d /mongod mongod && passwd mongod"
ssh root@cache20 "useradd -d /mongod mongod && passwd mongod"
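The sixteen commands above differ only in the host number, so they can be generated with a loop. A dry-run sketch that prints the commands instead of executing them (it assumes the hosts really are named cache05 through cache20):

```shell
# Dry-run: print the per-host useradd commands instead of running them.
gen_useradd_cmds() {
  # seq -w zero-pads every number to the width of the largest, giving 05..20
  for i in $(seq -w 5 20); do
    echo "ssh root@cache$i \"useradd -d /mongod mongod && passwd mongod\""
  done
}
gen_useradd_cmds
```

Piping the generated lines to `sh` would execute them; printing them first makes the list easy to review.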

Passwordless SSH Login

We pick cache20 as the cluster's management host; with passwordless SSH it can manage the other nodes without a password prompt each time

# Run on cache20
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub cache20
ssh-copy-id -i ~/.ssh/id_rsa.pub cache19
ssh-copy-id -i ~/.ssh/id_rsa.pub cache18
ssh-copy-id -i ~/.ssh/id_rsa.pub cache17
ssh-copy-id -i ~/.ssh/id_rsa.pub cache16
ssh-copy-id -i ~/.ssh/id_rsa.pub cache15
ssh-copy-id -i ~/.ssh/id_rsa.pub cache14
ssh-copy-id -i ~/.ssh/id_rsa.pub cache13
ssh-copy-id -i ~/.ssh/id_rsa.pub cache12
ssh-copy-id -i ~/.ssh/id_rsa.pub cache11
ssh-copy-id -i ~/.ssh/id_rsa.pub cache10
ssh-copy-id -i ~/.ssh/id_rsa.pub cache09
ssh-copy-id -i ~/.ssh/id_rsa.pub cache08
ssh-copy-id -i ~/.ssh/id_rsa.pub cache07
ssh-copy-id -i ~/.ssh/id_rsa.pub cache06
ssh-copy-id -i ~/.ssh/id_rsa.pub cache05

MongoDB Installation

Install and configure environment variables

#Copy the package mongodb-linux-x86_64-4.2.0.tgz to every host's home directory
#Run these commands on cache20
ssh mongod@cache05 "mkdir -p /mongod/Tools/ && tar -zvxf mongodb-linux-x86_64-4.2.0.tgz -C /mongod/Tools/ && mv /mongod/Tools/mongodb-linux-x86_64-4.2.0 /mongod/Tools/mongodb"
ssh mongod@cache06 "mkdir -p /mongod/Tools/ && tar -zvxf mongodb-linux-x86_64-4.2.0.tgz -C /mongod/Tools/ && mv /mongod/Tools/mongodb-linux-x86_64-4.2.0 /mongod/Tools/mongodb"
ssh mongod@cache07 "mkdir -p /mongod/Tools/ && tar -zvxf mongodb-linux-x86_64-4.2.0.tgz -C /mongod/Tools/ && mv /mongod/Tools/mongodb-linux-x86_64-4.2.0 /mongod/Tools/mongodb"
ssh mongod@cache08 "mkdir -p /mongod/Tools/ && tar -zvxf mongodb-linux-x86_64-4.2.0.tgz -C /mongod/Tools/ && mv /mongod/Tools/mongodb-linux-x86_64-4.2.0 /mongod/Tools/mongodb"
ssh mongod@cache09 "mkdir -p /mongod/Tools/ && tar -zvxf mongodb-linux-x86_64-4.2.0.tgz -C /mongod/Tools/ && mv /mongod/Tools/mongodb-linux-x86_64-4.2.0 /mongod/Tools/mongodb"
ssh mongod@cache10 "mkdir -p /mongod/Tools/ && tar -zvxf mongodb-linux-x86_64-4.2.0.tgz -C /mongod/Tools/ && mv /mongod/Tools/mongodb-linux-x86_64-4.2.0 /mongod/Tools/mongodb"
ssh mongod@cache11 "mkdir -p /mongod/Tools/ && tar -zvxf mongodb-linux-x86_64-4.2.0.tgz -C /mongod/Tools/ && mv /mongod/Tools/mongodb-linux-x86_64-4.2.0 /mongod/Tools/mongodb"
ssh mongod@cache12 "mkdir -p /mongod/Tools/ && tar -zvxf mongodb-linux-x86_64-4.2.0.tgz -C /mongod/Tools/ && mv /mongod/Tools/mongodb-linux-x86_64-4.2.0 /mongod/Tools/mongodb"
ssh mongod@cache13 "mkdir -p /mongod/Tools/ && tar -zvxf mongodb-linux-x86_64-4.2.0.tgz -C /mongod/Tools/ && mv /mongod/Tools/mongodb-linux-x86_64-4.2.0 /mongod/Tools/mongodb"
ssh mongod@cache14 "mkdir -p /mongod/Tools/ && tar -zvxf mongodb-linux-x86_64-4.2.0.tgz -C /mongod/Tools/ && mv /mongod/Tools/mongodb-linux-x86_64-4.2.0 /mongod/Tools/mongodb"
ssh mongod@cache15 "mkdir -p /mongod/Tools/ && tar -zvxf mongodb-linux-x86_64-4.2.0.tgz -C /mongod/Tools/ && mv /mongod/Tools/mongodb-linux-x86_64-4.2.0 /mongod/Tools/mongodb"
ssh mongod@cache16 "mkdir -p /mongod/Tools/ && tar -zvxf mongodb-linux-x86_64-4.2.0.tgz -C /mongod/Tools/ && mv /mongod/Tools/mongodb-linux-x86_64-4.2.0 /mongod/Tools/mongodb"
ssh mongod@cache17 "mkdir -p /mongod/Tools/ && tar -zvxf mongodb-linux-x86_64-4.2.0.tgz -C /mongod/Tools/ && mv /mongod/Tools/mongodb-linux-x86_64-4.2.0 /mongod/Tools/mongodb"
ssh mongod@cache18 "mkdir -p /mongod/Tools/ && tar -zvxf mongodb-linux-x86_64-4.2.0.tgz -C /mongod/Tools/ && mv /mongod/Tools/mongodb-linux-x86_64-4.2.0 /mongod/Tools/mongodb"
ssh mongod@cache19 "mkdir -p /mongod/Tools/ && tar -zvxf mongodb-linux-x86_64-4.2.0.tgz -C /mongod/Tools/ && mv /mongod/Tools/mongodb-linux-x86_64-4.2.0 /mongod/Tools/mongodb"
ssh mongod@cache20 "mkdir -p /mongod/Tools/ && tar -zvxf mongodb-linux-x86_64-4.2.0.tgz -C /mongod/Tools/ && mv /mongod/Tools/mongodb-linux-x86_64-4.2.0 /mongod/Tools/mongodb"
# Configure environment variables (in each host's ~/.bash_profile)
vim ~/.bash_profile
# Append the following line
export PATH=/mongod/Tools/mongodb/bin:$PATH
# Apply it immediately
source ~/.bash_profile
# Per the requirements, data files go under /data
ssh root@cache20 "mkdir /data/mongodata && chown mongod:mongod /data/mongodata"
ssh root@cache19 "mkdir /data/mongodata && chown mongod:mongod /data/mongodata"
ssh root@cache18 "mkdir /data/mongodata && chown mongod:mongod /data/mongodata"
ssh root@cache17 "mkdir /data/mongodata && chown mongod:mongod /data/mongodata"
ssh root@cache16 "mkdir /data/mongodata && chown mongod:mongod /data/mongodata"
ssh root@cache15 "mkdir /data/mongodata && chown mongod:mongod /data/mongodata"
ssh root@cache14 "mkdir /data/mongodata && chown mongod:mongod /data/mongodata"
ssh root@cache13 "mkdir /data/mongodata && chown mongod:mongod /data/mongodata"
ssh root@cache12 "mkdir /data/mongodata && chown mongod:mongod /data/mongodata"
ssh root@cache11 "mkdir /data/mongodata && chown mongod:mongod /data/mongodata"
ssh root@cache10 "mkdir /data/mongodata && chown mongod:mongod /data/mongodata"
ssh root@cache09 "mkdir /data/mongodata && chown mongod:mongod /data/mongodata"
ssh root@cache08 "mkdir /data/mongodata && chown mongod:mongod /data/mongodata"
ssh root@cache07 "mkdir /data/mongodata && chown mongod:mongod /data/mongodata"
ssh root@cache06 "mkdir /data/mongodata && chown mongod:mongod /data/mongodata"
ssh root@cache05 "mkdir /data/mongodata && chown mongod:mongod /data/mongodata"

Plan the Directory Layout

Plan the directory structure and create the folders

#mongod is started from a config file; config files live in /mongod/Tools/mongodb/conf/
#Logs and pid files live in /mongod/Tools/mongodb/log/
#Data lives in /data/mongodata

#Run on cache20 to create the folders
ssh mongod@cache05 "cd /mongod/Tools/mongodb/ && mkdir conf log /data/mongodata/shard1"
ssh mongod@cache06 "cd /mongod/Tools/mongodb/ && mkdir conf log /data/mongodata/configsrv /data/mongodata/shard1 /data/mongodata/shard2"
ssh mongod@cache07 "cd /mongod/Tools/mongodb/ && mkdir conf log /data/mongodata/shard2"
ssh mongod@cache08 "cd /mongod/Tools/mongodb/ && mkdir conf log /data/mongodata/configsrv /data/mongodata/shard2 /data/mongodata/shard1"
ssh mongod@cache09 "cd /mongod/Tools/mongodb/ && mkdir conf log /data/mongodata/shard3"
ssh mongod@cache10 "cd /mongod/Tools/mongodb/ && mkdir conf log /data/mongodata/configsrv /data/mongodata/shard3 /data/mongodata/shard4"
ssh mongod@cache11 "cd /mongod/Tools/mongodb/ && mkdir conf log /data/mongodata/shard4"
ssh mongod@cache12 "cd /mongod/Tools/mongodb/ && mkdir conf log /data/mongodata/shard4 /data/mongodata/shard3"
ssh mongod@cache13 "cd /mongod/Tools/mongodb/ && mkdir conf log /data/mongodata/shard5"
ssh mongod@cache14 "cd /mongod/Tools/mongodb/ && mkdir conf log /data/mongodata/shard5 /data/mongodata/shard6"
ssh mongod@cache15 "cd /mongod/Tools/mongodb/ && mkdir conf log /data/mongodata/shard6"
ssh mongod@cache16 "cd /mongod/Tools/mongodb/ && mkdir conf log /data/mongodata/shard6 /data/mongodata/shard5"
ssh mongod@cache17 "cd /mongod/Tools/mongodb/ && mkdir conf log /data/mongodata/shard7"
ssh mongod@cache18 "cd /mongod/Tools/mongodb/ && mkdir conf log /data/mongodata/shard7 /data/mongodata/shard8"
ssh mongod@cache19 "cd /mongod/Tools/mongodb/ && mkdir conf log /data/mongodata/shard8"
ssh mongod@cache20 "cd /mongod/Tools/mongodb/ && mkdir conf log /data/mongodata/shard8 /data/mongodata/shard7"
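The host-to-directory layout above can be captured in one table and turned back into commands, which makes it easy to audit which host carries which shard. A dry-run sketch that prints the mkdir commands rather than running them (bash 4+ for the associative array; the mapping mirrors the commands above):

```shell
# Dry-run: map each host to its data directories and print the mkdir commands.
declare -A DATA_DIRS=(
  [cache05]="shard1"  [cache06]="configsrv shard1 shard2"
  [cache07]="shard2"  [cache08]="configsrv shard2 shard1"
  [cache09]="shard3"  [cache10]="configsrv shard3 shard4"
  [cache11]="shard4"  [cache12]="shard4 shard3"
  [cache13]="shard5"  [cache14]="shard5 shard6"
  [cache15]="shard6"  [cache16]="shard6 shard5"
  [cache17]="shard7"  [cache18]="shard7 shard8"
  [cache19]="shard8"  [cache20]="shard8 shard7"
)
gen_mkdir_cmds() {
  local host d dirs
  for host in "${!DATA_DIRS[@]}"; do
    dirs=""
    for d in ${DATA_DIRS[$host]}; do dirs="$dirs /data/mongodata/$d"; done
    echo "ssh mongod@$host \"mkdir -p /mongod/Tools/mongodb/conf /mongod/Tools/mongodb/log$dirs\""
  done
}
gen_mkdir_cmds
```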

Cluster Configuration

config server (replica set)

Per the server plan, deploy the three-member config server replica set on cache06, cache08 and cache10, adding the following config file on each of the three servers:
vim /mongod/Tools/mongodb/conf/configsvr.conf

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /mongod/Tools/mongodb/log/configsvr.log

# Where and how to store data.
storage:
  dbPath: /data/mongodata/configsrv
  journal:
    enabled: true

# how the process runs
processManagement:
  fork: true
  pidFilePath: /mongod/Tools/mongodb/log/configsrv.pid

# network interfaces
net:
  port: 28000
  bindIp: 0.0.0.0

#operationProfiling:
replication:
  replSetName: configs

sharding:
  clusterRole: configsvr

Start the config server on all three hosts

mongod -f /mongod/Tools/mongodb/conf/configsvr.conf

Log in to any of the three and initialize the replica set

mongo --port 28000 --host cache06
#Define the replica set config (the "_id" value must match replication.replSetName in the config file)
> config = {_id : "configs",members : [{_id : 0, host : "cache06:28000" },{_id : 1, host : "cache08:28000" },{_id : 2, host : "cache10:28000" }]}
#Initialize the replica set
> rs.initiate(config)
......
#Check the replica set status
configs:SECONDARY> rs.status()
......
configs:PRIMARY>
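The document passed to rs.initiate() is just a set name plus a numbered member list, so it can be generated instead of typed by hand. A sketch (gen_rs_config is a hypothetical helper, not a MongoDB command; it only prints mongo-shell input):

```shell
# Print an rs.initiate() call for a replica set, given its name, port, and hosts.
gen_rs_config() {
  local name=$1 port=$2
  shift 2
  local members="" i=0 h
  for h in "$@"; do
    [ -n "$members" ] && members="$members,"
    members="$members{_id : $i, host : \"$h:$port\" }"
    i=$((i+1))
  done
  echo "rs.initiate({_id : \"$name\", members : [$members]})"
}
gen_rs_config configs 28000 cache06 cache08 cache10
```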

shard server (replica set)

Configure the shard1 replica set

Add the following configuration on cache05, cache06 and cache08:
vim /mongod/Tools/mongodb/conf/shard1.conf

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /mongod/Tools/mongodb/log/shard1.log

# Where and how to store data.
storage:
  dbPath: /data/mongodata/shard1
  journal:
    enabled: true

# how the process runs
processManagement:
  fork: true
  pidFilePath: /mongod/Tools/mongodb/log/shard1.pid

# network interfaces
net:
  port: 27001
  bindIp: 0.0.0.0

#operationProfiling:
replication:
  replSetName: shard1

sharding:
  clusterRole: shardsvr

Start the shard1 server on all three hosts

mongod -f /mongod/Tools/mongodb/conf/shard1.conf

Log in to any of the three and initialize the replica set

mongo --port 27001 --host cache05
#Define the replica set config ("_id" must match replication.replSetName in the config file; priority is a member weight from 0 to 1000, higher-priority members are preferred as primary, and priority 0 can never become primary)
> config = {_id : "shard1",members : [{_id : 0, host : "cache05:27001", priority : 2 },{_id : 1, host : "cache06:27001", priority : 1 },{_id : 2, host : "cache08:27001", arbiterOnly :true}]}
> rs.initiate(config)
shard1:SECONDARY> rs.status();
......
shard1:PRIMARY>
......
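Every data shard below is initialized with the same three-member shape: a priority-2 primary, a priority-1 secondary, and an arbiter. A sketch of a generator for that document (gen_shard_config is a hypothetical helper; it prints mongo-shell input rather than running anything):

```shell
# Print the rs.initiate() call for one shard: primary, secondary, arbiter.
gen_shard_config() {
  local name=$1 port=$2 primary=$3 secondary=$4 arbiter=$5
  cat <<EOF
rs.initiate({_id : "$name", members : [
  {_id : 0, host : "$primary:$port", priority : 2 },
  {_id : 1, host : "$secondary:$port", priority : 1 },
  {_id : 2, host : "$arbiter:$port", arbiterOnly : true }
]})
EOF
}
gen_shard_config shard1 27001 cache05 cache06 cache08
```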
Configure the shard2 replica set

Add the following configuration on cache07, cache08 and cache06:
vim /mongod/Tools/mongodb/conf/shard2.conf

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /mongod/Tools/mongodb/log/shard2.log

# Where and how to store data.
storage:
  dbPath: /data/mongodata/shard2
  journal:
    enabled: true

# how the process runs
processManagement:
  fork: true
  pidFilePath: /mongod/Tools/mongodb/log/shard2.pid

# network interfaces
net:
  port: 27002
  bindIp: 0.0.0.0

#operationProfiling:
replication:
  replSetName: shard2

sharding:
  clusterRole: shardsvr

Start the shard2 server on all three hosts

mongod -f /mongod/Tools/mongodb/conf/shard2.conf

Log in to any of the three and initialize the replica set

mongo --port 27002 --host cache07
#Define the replica set config ("_id" must match replication.replSetName in the config file; priority is a member weight from 0 to 1000, higher-priority members are preferred as primary, and priority 0 can never become primary)
> config = {_id : "shard2",members : [{_id : 0, host : "cache07:27002", priority : 2 },{_id : 1, host : "cache08:27002", priority : 1 },{_id : 2, host : "cache06:27002", arbiterOnly :true}]}
......
> rs.initiate(config)
......
shard2:SECONDARY> rs.status();
......
shard2:PRIMARY>
Configure the shard3 replica set

Add the following configuration on cache09, cache10 and cache12:
vim /mongod/Tools/mongodb/conf/shard3.conf

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /mongod/Tools/mongodb/log/shard3.log

# Where and how to store data.
storage:
  dbPath: /data/mongodata/shard3
  journal:
    enabled: true

# how the process runs
processManagement:
  fork: true
  pidFilePath: /mongod/Tools/mongodb/log/shard3.pid

# network interfaces
net:
  port: 27003
  bindIp: 0.0.0.0

#operationProfiling:
replication:
  replSetName: shard3

sharding:
  clusterRole: shardsvr

Start the shard3 server on all three hosts

mongod -f /mongod/Tools/mongodb/conf/shard3.conf

Log in to any of the three and initialize the replica set

mongo --port 27003 --host cache09
#Define the replica set config ("_id" must match replication.replSetName in the config file; priority is a member weight from 0 to 1000, higher-priority members are preferred as primary, and priority 0 can never become primary)
> config = {_id : "shard3",members : [{_id : 0, host : "cache09:27003", priority : 2 },{_id : 1, host : "cache10:27003", priority : 1 },{_id : 2, host : "cache12:27003", arbiterOnly :true}]}
> rs.initiate(config)
shard3:SECONDARY> rs.status();
shard3:PRIMARY>
Configure the shard4 replica set

Add the following configuration on cache11, cache12 and cache10:
vim /mongod/Tools/mongodb/conf/shard4.conf

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /mongod/Tools/mongodb/log/shard4.log

# Where and how to store data.
storage:
  dbPath: /data/mongodata/shard4
  journal:
    enabled: true

# how the process runs
processManagement:
  fork: true
  pidFilePath: /mongod/Tools/mongodb/log/shard4.pid

# network interfaces
net:
  port: 27004
  bindIp: 0.0.0.0

#operationProfiling:
replication:
  replSetName: shard4

sharding:
  clusterRole: shardsvr

Start the shard4 server on all three hosts

mongod -f /mongod/Tools/mongodb/conf/shard4.conf

Log in to any of the three and initialize the replica set

mongo --port 27004 --host cache11
#Define the replica set config ("_id" must match replication.replSetName in the config file; priority is a member weight from 0 to 1000, higher-priority members are preferred as primary, and priority 0 can never become primary)
> config = {_id : "shard4",members : [{_id : 0, host : "cache11:27004", priority : 2 },{_id : 1, host : "cache12:27004", priority : 1 },{_id : 2, host : "cache10:27004", arbiterOnly :true}]}
> rs.initiate(config)
shard4:SECONDARY> rs.status();
shard4:PRIMARY>
Configure the shard5 replica set

Add the following configuration on cache13, cache14 and cache16:
vim /mongod/Tools/mongodb/conf/shard5.conf

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /mongod/Tools/mongodb/log/shard5.log

# Where and how to store data.
storage:
  dbPath: /data/mongodata/shard5
  journal:
    enabled: true

# how the process runs
processManagement:
  fork: true
  pidFilePath: /mongod/Tools/mongodb/log/shard5.pid

# network interfaces
net:
  port: 27005
  bindIp: 0.0.0.0

#operationProfiling:
replication:
  replSetName: shard5

sharding:
  clusterRole: shardsvr

Start the shard5 server on all three hosts

mongod -f /mongod/Tools/mongodb/conf/shard5.conf

Log in to any of the three and initialize the replica set

mongo --port 27005 --host cache13
#Define the replica set config ("_id" must match replication.replSetName in the config file; priority is a member weight from 0 to 1000, higher-priority members are preferred as primary, and priority 0 can never become primary)
> config = {_id : "shard5",members : [{_id : 0, host : "cache13:27005", priority : 2 },{_id : 1, host : "cache14:27005", priority : 1 },{_id : 2, host : "cache16:27005", arbiterOnly :true}]}
> rs.initiate(config)
shard5:SECONDARY> rs.status();
shard5:PRIMARY>
Configure the shard6 replica set

Add the following configuration on cache15, cache16 and cache14:
vim /mongod/Tools/mongodb/conf/shard6.conf

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /mongod/Tools/mongodb/log/shard6.log

# Where and how to store data.
storage:
  dbPath: /data/mongodata/shard6
  journal:
    enabled: true

# how the process runs
processManagement:
  fork: true
  pidFilePath: /mongod/Tools/mongodb/log/shard6.pid

# network interfaces
net:
  port: 27006
  bindIp: 0.0.0.0

#operationProfiling:
replication:
  replSetName: shard6

sharding:
  clusterRole: shardsvr

Start the shard6 server on all three hosts

mongod -f /mongod/Tools/mongodb/conf/shard6.conf

Log in to any of the three and initialize the replica set

mongo --port 27006 --host cache15
#Define the replica set config ("_id" must match replication.replSetName in the config file; priority is a member weight from 0 to 1000, higher-priority members are preferred as primary, and priority 0 can never become primary)
> config = {_id : "shard6",members : [{_id : 0, host : "cache15:27006", priority : 2 },{_id : 1, host : "cache16:27006", priority : 1 },{_id : 2, host : "cache14:27006", arbiterOnly :true}]}
> rs.initiate(config)
shard6:SECONDARY> rs.status();
shard6:PRIMARY>
Configure the shard7 replica set

Add the following configuration on cache17, cache18 and cache20:
vim /mongod/Tools/mongodb/conf/shard7.conf

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /mongod/Tools/mongodb/log/shard7.log

# Where and how to store data.
storage:
  dbPath: /data/mongodata/shard7
  journal:
    enabled: true

# how the process runs
processManagement:
  fork: true
  pidFilePath: /mongod/Tools/mongodb/log/shard7.pid

# network interfaces
net:
  port: 27007
  bindIp: 0.0.0.0

#operationProfiling:
replication:
  replSetName: shard7

sharding:
  clusterRole: shardsvr

Start the shard7 server on all three hosts

mongod -f /mongod/Tools/mongodb/conf/shard7.conf

Log in to any of the three and initialize the replica set

mongo --port 27007 --host cache17
#Define the replica set config ("_id" must match replication.replSetName in the config file; priority is a member weight from 0 to 1000, higher-priority members are preferred as primary, and priority 0 can never become primary)
> config = {_id : "shard7",members : [{_id : 0, host : "cache17:27007", priority : 2 },{_id : 1, host : "cache18:27007", priority : 1 },{_id : 2, host : "cache20:27007", arbiterOnly :true}]}
> rs.initiate(config)
shard7:SECONDARY> rs.status();
shard7:PRIMARY>
Configure the shard8 replica set

Add the following configuration on cache19, cache20 and cache18:
vim /mongod/Tools/mongodb/conf/shard8.conf

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /mongod/Tools/mongodb/log/shard8.log

# Where and how to store data.
storage:
  dbPath: /data/mongodata/shard8
  journal:
    enabled: true

# how the process runs
processManagement:
  fork: true
  pidFilePath: /mongod/Tools/mongodb/log/shard8.pid

# network interfaces
net:
  port: 27008
  bindIp: 0.0.0.0

#operationProfiling:
replication:
  replSetName: shard8

sharding:
  clusterRole: shardsvr

Start the shard8 server on all three hosts

mongod -f /mongod/Tools/mongodb/conf/shard8.conf

Log in to any of the three and initialize the replica set

mongo --port 27008 --host cache19
#Define the replica set config ("_id" must match replication.replSetName in the config file; priority is a member weight from 0 to 1000, higher-priority members are preferred as primary, and priority 0 can never become primary)
> config = {_id : "shard8",members : [{_id : 0, host : "cache19:27008", priority : 2 },{_id : 1, host : "cache20:27008", priority : 1 },{_id : 2, host : "cache18:27008", arbiterOnly :true}]}
> rs.initiate(config)
shard8:SECONDARY> rs.status();
shard8:PRIMARY>
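The eight shard initializations above differ only in the replica set name, the port, and the three hosts, so the whole series can be produced from one table. A sketch; each printed line would be pasted into the mongo shell on the matching shard's primary:

```shell
# Print all eight rs.initiate() calls from a table of:
#   name  port  primary  secondary  arbiter
gen_all_shard_inits() {
  awk 'NF {printf "rs.initiate({_id : \"%s\", members : [{_id : 0, host : \"%s:%s\", priority : 2},{_id : 1, host : \"%s:%s\", priority : 1},{_id : 2, host : \"%s:%s\", arbiterOnly : true}]})\n", $1, $3, $2, $4, $2, $5, $2}' <<'EOF'
shard1 27001 cache05 cache06 cache08
shard2 27002 cache07 cache08 cache06
shard3 27003 cache09 cache10 cache12
shard4 27004 cache11 cache12 cache10
shard5 27005 cache13 cache14 cache16
shard6 27006 cache15 cache16 cache14
shard7 27007 cache17 cache18 cache20
shard8 27008 cache19 cache20 cache18
EOF
}
gen_all_shard_inits
```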

mongos server (router)

Per the server plan, deploy routers on cache12, cache14, cache16, cache18 and cache20, adding the following config file on each of the five servers:
vim /mongod/Tools/mongodb/conf/mongos.conf

systemLog:
  destination: file
  logAppend: true
  path: /mongod/Tools/mongodb/log/mongos.log
processManagement:
  fork: true
  pidFilePath: /mongod/Tools/mongodb/log/mongos.pid

# network interfaces
net:
  port: 29000
  bindIp: 0.0.0.0
# Config servers to connect to (there must be 1 or 3 of them); "configs" is the config server replica set name
sharding:
  configDB: configs/cache06:28000,cache08:28000,cache10:28000

Start the mongos server on all five hosts

mongos -f /mongod/Tools/mongodb/conf/mongos.conf

Enable Sharding

The config servers, shard servers and routers are now all in place; next, enable sharding so that applications connecting through a router can use the sharding machinery

Log in to any mongos
mongo --port 29000 --host cache12
mongos> use admin
switched to db admin
mongos> sh.addShard("shard1/cache05:27001,cache06:27001,cache08:27001")
mongos> sh.addShard("shard2/cache07:27002,cache08:27002,cache06:27002")
mongos> sh.addShard("shard3/cache09:27003,cache10:27003,cache12:27003")
mongos> sh.addShard("shard4/cache11:27004,cache12:27004,cache10:27004")
mongos> sh.addShard("shard5/cache13:27005,cache14:27005,cache16:27005")
mongos> sh.addShard("shard6/cache15:27006,cache16:27006,cache14:27006")
mongos> sh.addShard("shard7/cache17:27007,cache18:27007,cache20:27007")
mongos> sh.addShard("shard8/cache19:27008,cache20:27008,cache18:27008")
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5d8048f71f14b07fb9b707e5")
}
shards:
{ "_id" : "shard1", "host" : "shard1/cache05:27001,cache06:27001", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/cache07:27002,cache08:27002", "state" : 1 }
{ "_id" : "shard3", "host" : "shard3/cache09:27003,cache10:27003", "state" : 1 }
{ "_id" : "shard4", "host" : "shard4/cache11:27004,cache12:27004", "state" : 1 }
{ "_id" : "shard5", "host" : "shard5/cache13:27005,cache14:27005", "state" : 1 }
{ "_id" : "shard6", "host" : "shard6/cache15:27006,cache16:27006", "state" : 1 }
{ "_id" : "shard7", "host" : "shard7/cache17:27007,cache18:27007", "state" : 1 }
{ "_id" : "shard8", "host" : "shard8/cache19:27008,cache20:27008", "state" : 1 }
active mongoses:
"4.2.0" : 5
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
shard1 1
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
mongos>
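The eight sh.addShard() calls above differ only in the shard name, port, and hosts, so they too can be generated. A dry-run sketch that prints the commands from a small table:

```shell
# Print the eight sh.addShard() calls from a name/port/hosts table.
gen_addshard_cmds() {
  while read -r name port h1 h2 h3; do
    [ -z "$name" ] && continue
    echo "sh.addShard(\"$name/$h1:$port,$h2:$port,$h3:$port\")"
  done <<'EOF'
shard1 27001 cache05 cache06 cache08
shard2 27002 cache07 cache08 cache06
shard3 27003 cache09 cache10 cache12
shard4 27004 cache11 cache12 cache10
shard5 27005 cache13 cache14 cache16
shard6 27006 cache15 cache16 cache14
shard7 27007 cache17 cache18 cache20
shard8 27008 cache19 cache20 cache18
EOF
}
gen_addshard_cmds
```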

Test Sharding

Connect to MongoDB
mongo --port 29000 --host cache20

#To make the effect visible, lower the default chunksize; here it is set to 1MB
#The default chunksize is 64MB; the general form of the command:
#use config
#db.settings.save( { _id:"chunksize", value: <sizeInMB> } )

use config
db.settings.save( { _id:"chunksize", value: 1 } )
#Enable sharding on the test database
#Pick a shard key (age) and shard the collection mycoll on it

mongos> sh.enableSharding("test")
{
"ok" : 1,
"operationTime" : Timestamp(1562670609, 5),
"$clusterTime" : {
"clusterTime" : Timestamp(1562670609, 5),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
mongos> sh.shardCollection("test.mycoll", {"age": 1})
{
"collectionsharded" : "test.mycoll",
"collectionUUID" : UUID("12a512b1-377a-406b-bde9-36c472fd2e0a"),
"ok" : 1,
"operationTime" : Timestamp(1562670619, 14),
"$clusterTime" : {
"clusterTime" : Timestamp(1562670619, 14),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
#Test sharding by writing data into the database

mongos> use test
switched to db test
mongos> for (i = 1; i <= 100000; i++) db.mycoll.insert({age:(i%100), name:"bigboss_user"+i, address:i+", Some Road, beijing", country:"China", course:"course"+(i%12)})
WriteResult({ "nInserted" : 1 })
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5d24659a031dddbcb7ec8f7c")
}
shards:
{ "_id" : "shard1", "host" : "shard1/hadoop23:27001,hadoop24:27001", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/hadoop26:27002,hadoop27:27002", "state" : 1 }
{ "_id" : "shard3", "host" : "shard3/hadoop25:27003,hadoop29:27003", "state" : 1 }
active mongoses:
"4.0.9" : 3
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: yes
Collections with active migrations:
test.mycoll2 started at Tue Jul 09 2019 19:18:20 GMT+0800 (CST)
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
3 : Success
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
shard1 1
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
{ "_id" : "test", "primary" : "shard3", "partitioned" : true, "version" : { "uuid" : UUID("871fa9c9-e3e2-42a6-9e6b-0ddd3b69a302"), "lastMod" : 1 } }
test.mycoll
shard key: { "age" : 1 }
unique: false
balancing: true
chunks:
shard1 3
shard2 4
shard3 2
{ "age" : { "$minKey" : 1 } } -->> { "age" : 0 } on : shard2 Timestamp(2, 0)
{ "age" : 0 } -->> { "age" : 17 } on : shard1 Timestamp(4, 2)
{ "age" : 17 } -->> { "age" : 34 } on : shard1 Timestamp(4, 3)
{ "age" : 34 } -->> { "age" : 38 } on : shard1 Timestamp(4, 4)
{ "age" : 38 } -->> { "age" : 54 } on : shard2 Timestamp(4, 5)
{ "age" : 54 } -->> { "age" : 71 } on : shard2 Timestamp(4, 6)
{ "age" : 71 } -->> { "age" : 77 } on : shard2 Timestamp(4, 7)
{ "age" : 77 } -->> { "age" : 92 } on : shard3 Timestamp(4, 1)
{ "age" : 92 } -->> { "age" : { "$maxKey" : 1 } } on : shard3 Timestamp(2, 1)

Sharding works

Cluster Security Authentication

Generate a Key File

With keyfile authentication, every mongod instance in the replica set uses the contents of the keyfile as a shared password; only mongod or mongos instances holding the correct keyfile can connect to the replica set. The keyfile content must be between 6 and 1024 characters long, and on unix/linux the file owner must have at least read permission on it.

#Generate the key file:
openssl rand -base64 756 > /mongod/Tools/mongodb/KeyFile.file
chmod 400 /mongod/Tools/mongodb/KeyFile.file
#The first command generates the key file; the second uses chmod to give the owner read-only access
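The 6-to-1024-character constraint can be sanity-checked locally before distributing the file. A sketch using coreutils base64 in place of openssl and a temp file (756 random bytes encode to roughly 1022 characters including newlines, just under the limit):

```shell
# Create a throwaway keyfile and verify MongoDB's constraints on it:
# content between 6 and 1024 characters, owner-read permission.
keyfile=$(mktemp)
head -c 756 /dev/urandom | base64 > "$keyfile"
chmod 400 "$keyfile"
size=$(wc -c < "$keyfile")
# GNU stat uses -c, BSD stat uses -f
perms=$(stat -c %a "$keyfile" 2>/dev/null || stat -f %Lp "$keyfile")
echo "size=$size perms=$perms"
```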

Copy the Key File to Every Machine in the Cluster

scp /mongod/Tools/mongodb/KeyFile.file cache05:/mongod/Tools/mongodb/
scp /mongod/Tools/mongodb/KeyFile.file cache06:/mongod/Tools/mongodb/
scp /mongod/Tools/mongodb/KeyFile.file cache07:/mongod/Tools/mongodb/
scp /mongod/Tools/mongodb/KeyFile.file cache08:/mongod/Tools/mongodb/
scp /mongod/Tools/mongodb/KeyFile.file cache09:/mongod/Tools/mongodb/
scp /mongod/Tools/mongodb/KeyFile.file cache10:/mongod/Tools/mongodb/
scp /mongod/Tools/mongodb/KeyFile.file cache11:/mongod/Tools/mongodb/
scp /mongod/Tools/mongodb/KeyFile.file cache12:/mongod/Tools/mongodb/
scp /mongod/Tools/mongodb/KeyFile.file cache13:/mongod/Tools/mongodb/
scp /mongod/Tools/mongodb/KeyFile.file cache14:/mongod/Tools/mongodb/
scp /mongod/Tools/mongodb/KeyFile.file cache15:/mongod/Tools/mongodb/
scp /mongod/Tools/mongodb/KeyFile.file cache16:/mongod/Tools/mongodb/
scp /mongod/Tools/mongodb/KeyFile.file cache17:/mongod/Tools/mongodb/
scp /mongod/Tools/mongodb/KeyFile.file cache18:/mongod/Tools/mongodb/
scp /mongod/Tools/mongodb/KeyFile.file cache19:/mongod/Tools/mongodb/
scp /mongod/Tools/mongodb/KeyFile.file cache20:/mongod/Tools/mongodb/
#The key file must be identical on every machine. Its location is arbitrary, but for easy lookup put it in the same fixed place on each machine.

Create the Admin Account

Accounts can be added after authentication is enabled on the cluster, but doing so then is delicate: you only get one chance, and if you get it wrong you can no longer connect to the cluster. It is safer to create the admin username and password first, then enable authentication and restart

Normally users are added at the cluster level, but some maintenance tasks must be run against a single shard, so we also need shard-local users (Shard-local users), which cannot be created through mongos.


read: lets the user read the given database
readWrite: lets the user read and write the given database
dbAdmin: lets the user run admin functions in the given database, such as creating and dropping indexes, viewing statistics, or accessing system.profile
userAdmin: lets the user write to the system.users collection, i.e. create, delete and manage users in the given database
clusterAdmin: only available in the admin database; grants admin rights over all sharding and replica set functions
readAnyDatabase: only available in the admin database; grants read access to all databases
readWriteAnyDatabase: only available in the admin database; grants read/write access to all databases
userAdminAnyDatabase: only available in the admin database; grants userAdmin rights on all databases
dbAdminAnyDatabase: only available in the admin database; grants dbAdmin rights on all databases
root: only available in the admin database; the superuser account with full privileges

Connect to each shard's primary and to a mongos on any machine, and run the following on each

# Connect to each in turn:
# shard-local (mongod) users
# mongo --port 27001 --host cache05
# mongo --port 27002 --host cache07
# mongo --port 27003 --host cache09
# mongo --port 27004 --host cache11
# mongo --port 27005 --host cache13
# mongo --port 27006 --host cache15
# mongo --port 27007 --host cache17
# mongo --port 27008 --host cache19
# cluster-wide (mongos) users
# mongo --port 29000 --host cache20

#Be sure to use the admin database
#use admin
#db.createUser({user:'family', pwd:'cmcc1234', roles:[{ role: "root", db: "admin" }]})
mongos> use admin
switched to db admin
mongos> db.createUser({user:'family', pwd:'cmcc1234', roles:[{ role: "root", db: "admin" }]})
Successfully added user: {
"user" : "family",
"roles" : [
{
"role" : "root",
"db" : "admin"
}
]
}

Shut down every mongod and mongos in the cluster,

then delete mongod.lock under each mongod instance's data directory (skip this if nothing fails to start later)

Configure Authentication and Restart the Cluster

Add the following to the mongod config file on every machine (note: every mongod, not mongos)
security:
  keyFile: /mongod/Tools/mongodb/KeyFile.file
  authorization: enabled

Add the following to the mongos config file on every machine
security:
  keyFile: /mongod/Tools/mongodb/KeyFile.file

Restart the cluster

Cluster Management

To simplify cluster management and reduce maintenance complexity, a script is used to operate on the whole cluster at once.
The script cannot detect every abnormal situation; watch what it prints, and if something looks wrong, start the service by hand and contact the developer so the script can be improved.

#!/bin/bash
#********************************************************************
#Author: jingshuai
#QQ: 315134590
#Date: 2019-09-19
#URL: http://www.jsledd.cn
#Description: The mongod script
#Copyright (C): cmri family
#********************************************************************
MONGODB_HOME="/mongod/Tools/mongodb"
MONGODB_LOG_PATH="${MONGODB_HOME}/log"
MONGODB_BIN_PATH="${MONGODB_HOME}/bin"
MONGODB_CONF_PATH="${MONGODB_HOME}/conf"
MONGODB_DATA_PATH="/data/mongodata"
MONGODB_CONFIGSVR_CONF="${MONGODB_CONF_PATH}/configsvr.conf"
MONGODB_MONGOS_CONF="${MONGODB_CONF_PATH}/mongos.conf"
MONGODB_CONFIGPID="${MONGODB_LOG_PATH}/configsrv.pid"
MONGODB_MONGOSPID="${MONGODB_LOG_PATH}/mongos.pid"
MONGODB_CONFIGHOSTS=("cache06" "cache08" "cache10")
MONGODB_MONGOSHOSTS=("cache12" "cache14" "cache16" "cache18" "cache20")
declare -A MONGODB_SHARDHOSTS=()
MONGODB_SHARDHOSTS["shard1"]="cache05,cache06,cache08"
MONGODB_SHARDHOSTS["shard2"]="cache07,cache08,cache06"
MONGODB_SHARDHOSTS["shard3"]="cache09,cache10,cache12"
MONGODB_SHARDHOSTS["shard4"]="cache11,cache12,cache10"
MONGODB_SHARDHOSTS["shard5"]="cache13,cache14,cache16"
MONGODB_SHARDHOSTS["shard6"]="cache15,cache16,cache14"
MONGODB_SHARDHOSTS["shard7"]="cache17,cache18,cache20"
MONGODB_SHARDHOSTS["shard8"]="cache19,cache20,cache18"

# Check whether the process recorded in a remote pid file is still running
process_check(){
    host=$1
    pidfile=$2
    serverid=$3
    servername=$4
    if ssh $host test -e $pidfile; then
        # the \$ makes $(cat ...) expand on the remote host, not locally
        PSID=$(ssh $host "ps -ef | grep \$(cat $pidfile) | grep -v grep")
        echo "Checking $servername:$serverid on host $host"
        [ -n "$PSID" ] && echo -e "--------------------\n${PSID}\n--------------------" || echo "$servername pid $(ssh $host cat $pidfile) is not running"
    else
        echo "Cannot find $servername:$serverid on $host, please check manually."
    fi
}
# Stop the process recorded in a remote pid file, then remove stale lock files
process_stop(){
    host=$1
    pidfile=$2
    rmfiles=$3
    serverid=$4
    servername=$5
    echo "========== Stopping $servername($serverid) =========="
    if ssh $host test -e $pidfile; then
        # kill the pid stored inside the pid file, not the file name itself
        ssh $host "kill -2 \$(cat $pidfile)" && [ -n "$rmfiles" ] && ssh $host "rm -f $rmfiles"
    else
        echo "Cannot find $servername:$serverid on $host, please check manually."
    fi
}
# Start a mongod/mongos binary on a remote host with the given config file
process_start(){
    host=$1
    process_cmd=$2
    process_conf=$3
    serverid=$4
    servername=$5
    echo "========== Starting $servername($serverid) =========="
    ssh $host "$process_cmd -f $process_conf"
}
MGDB_START(){
    echo "-------------------- Starting services --------------------"
    echo "---------------- configServer ----------------"
    for confighost in ${MONGODB_CONFIGHOSTS[@]}; do
        process_start $confighost $MONGODB_BIN_PATH/mongod $MONGODB_CONFIGSVR_CONF "configsrv" "config server"
    done
    echo "---------------- mongosServer ----------------"
    for mongoshost in ${MONGODB_MONGOSHOSTS[@]}; do
        process_start $mongoshost $MONGODB_BIN_PATH/mongos $MONGODB_MONGOS_CONF "mongos" "router"
    done
    echo "---------------- shardsServer ----------------"
    for key in ${!MONGODB_SHARDHOSTS[@]}; do
        shardarray=(${MONGODB_SHARDHOSTS[$key]//,/ })
        for shardhost in ${shardarray[@]}; do
            echo "================ $key ===================="
            process_start $shardhost $MONGODB_BIN_PATH/mongod $MONGODB_CONF_PATH/$key.conf $key "shard server"
            echo "=========================================="
        done
    done
}
MGDB_STOP(){
    echo "-------------------- Stopping services --------------------"
    echo "---------------- mongosServer ----------------"
    for mongoshost in ${MONGODB_MONGOSHOSTS[@]}; do
        process_stop $mongoshost $MONGODB_MONGOSPID "" "mongos" "router"
    done
    echo "---------------- configServer ----------------"
    for confighost in ${MONGODB_CONFIGHOSTS[@]}; do
        process_stop $confighost $MONGODB_CONFIGPID "$MONGODB_DATA_PATH/configsrv/mongod.lock" "configsrv" "config server"
    done
    echo "---------------- shardsServer ----------------"
    for key in ${!MONGODB_SHARDHOSTS[@]}; do
        shardarray=(${MONGODB_SHARDHOSTS[$key]//,/ })
        shardpid="$MONGODB_LOG_PATH/$key.pid"
        for shardhost in ${shardarray[@]}; do
            echo "================ $key ===================="
            process_stop $shardhost $shardpid "$MONGODB_DATA_PATH/$key/mongod.lock" $key "shard server"
            echo "=========================================="
        done
    done
}

MGDB_STATUS(){
    echo "-------------------- Status check --------------------"
    echo "---------------- configServer ----------------"
    for confighost in ${MONGODB_CONFIGHOSTS[@]}; do
        process_check $confighost $MONGODB_CONFIGPID "configsrv" "config server"
    done
    echo "---------------- mongosServer ----------------"
    for mongoshost in ${MONGODB_MONGOSHOSTS[@]}; do
        process_check $mongoshost $MONGODB_MONGOSPID "mongos" "router"
    done
    echo "---------------- shardsServer ----------------"
    for key in ${!MONGODB_SHARDHOSTS[@]}; do
        shardarray=(${MONGODB_SHARDHOSTS[$key]//,/ })
        shardpid="$MONGODB_LOG_PATH/$key.pid"
        for shardhost in ${shardarray[@]}; do
            echo "================ $key ===================="
            process_check $shardhost $shardpid $key "shard server"
            echo "=========================================="
        done
    done
}
case "$1" in
    start)
        MGDB_START
        ;;
    stop)
        MGDB_STOP
        ;;
    status)
        MGDB_STATUS
        ;;
    restart)
        MGDB_STOP
        MGDB_START
        ;;
    *)
        echo $"Usage: $0 { start | stop | status | restart }"
        exit 1
esac