MongoDB Cluster: A Sharded Replica Set with Access Control

1. Environment

The cluster is deployed across three servers, with three shards and three replicas per shard. The shards are fully independent of one another; a database's unsharded data resides entirely on its primary shard, while sharded collections are distributed across the shards.

server:172.18.1.31 172.18.1.32 172.18.1.33
OS:CentOS 7.2
MongoDB Version:v4.0.5

The services are deployed as follows:

172.18.1.31     172.18.1.32     172.18.1.33
mongos:27017    mongos:27017    mongos:27017
config:27018    config:27018    config:27018
shard01:27101   shard01:27101   shard01:27101
shard02:27102   shard02:27102   shard02:27102
shard03:27103   shard03:27103   shard03:27103

2. Installation

2.1 Add the yum repository

    cat > /etc/yum.repos.d/mongodb-org-4.0.repo << EOF
    [mongodb-org-4.0]
    name=MongoDB Repository
    baseurl=https://repo.mongodb.org/yum/redhat/7/mongodb-org/4.0/x86_64/
    gpgcheck=1
    enabled=1
    gpgkey=https://www.mongodb.org/static/pgp/server-4.0.asc
    EOF

2.2 Install

    # yum -y install mongodb-org

2.3 Remove the mongod service installed by yum

    # systemctl disable mongod
    # rm -f /usr/lib/systemd/system/mongod.service
    # systemctl daemon-reload

3. High-Availability Cluster Deployment

3.1 Prepare the configuration files

Every server runs the mongos, config, shard01, shard02, and shard03 services, each with its own configuration file. All configuration files are kept under /etc/mongodb/.

    # mkdir /etc/mongodb
    # chown -R mongod.mongod /etc/mongodb

The config and shard data is stored under /data/mongodb/:

    # mkdir -p /data/mongodb/{config,shard01,shard02,shard03}/data /data/mongodb/mongos
    # chown -R mongod.mongod /data/mongodb

Logs all go under /data/logs/mongodb:

    # mkdir -p /data/logs/mongodb
    # chown -R mongod.mongod /data/logs/mongodb
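The directory preparation above can be combined into one idempotent helper; a sketch (the function name and parameterized roots are conveniences for illustration, not from the original steps):

```shell
#!/bin/sh
# Hypothetical helper: create the full directory layout in one idempotent
# step, with the three roots passed as parameters.
prepare_dirs() {
    conf_root="$1"; data_root="$2"; log_root="$3"
    mkdir -p "$conf_root" "$log_root" "$data_root/mongos"
    for s in config shard01 shard02 shard03; do
        mkdir -p "$data_root/$s/data"
    done
    # Ownership only matters on the real servers, where the mongod user exists.
    if id mongod >/dev/null 2>&1; then
        chown -R mongod:mongod "$conf_root" "$data_root" "$log_root"
    fi
}
# Real invocation: prepare_dirs /etc/mongodb /data/mongodb /data/logs/mongodb
```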

config server configuration

    # cat /etc/mongodb/config.conf
    # where to write logging data.
    systemLog:
      destination: file
      logAppend: true
      path: /data/logs/mongodb/config.log

    # Where and how to store data.
    storage:
      dbPath: /data/mongodb/config/data
      journal:
        enabled: true

    # how the process runs
    processManagement:
      fork: true
      pidFilePath: /data/mongodb/config/mongodb-config.pid
      timeZoneInfo: /usr/share/zoneinfo

    # network interfaces
    net:
      port: 27018
      bindIp: 0.0.0.0
      unixDomainSocket:
        pathPrefix: /var/run/mongodb

    #operationProfiling:
    replication:
      replSetName: ussmongo-config

    sharding:
      clusterRole: configsvr

shard01 configuration

    # cat /etc/mongodb/shard01.conf
    # where to write logging data.
    systemLog:
      destination: file
      logAppend: true
      path: /data/logs/mongodb/shard01.log
      logRotate: rename

    # Where and how to store data.
    storage:
      dbPath: /data/mongodb/shard01/data
      journal:
        enabled: true
      wiredTiger:
        engineConfig:
           cacheSizeGB: 20

    # how the process runs
    processManagement:
      fork: true
      pidFilePath: /data/mongodb/shard01/mongodb-shard01.pid
      timeZoneInfo: /usr/share/zoneinfo

    # network interfaces
    net:
      port: 27101
      bindIp: 0.0.0.0
      unixDomainSocket:
        pathPrefix: /var/run/mongodb

    #operationProfiling:
    replication:
      replSetName: ussmongo-shard01

    sharding:
      clusterRole: shardsvr

shard02 configuration

    # cat /etc/mongodb/shard02.conf
    # where to write logging data.
    systemLog:
      destination: file
      logAppend: true
      path: /data/logs/mongodb/shard02.log

    # Where and how to store data.
    storage:
      dbPath: /data/mongodb/shard02/data
      journal:
        enabled: true
      wiredTiger:
        engineConfig:
           cacheSizeGB: 20

    # how the process runs
    processManagement:
      fork: true
      pidFilePath: /data/mongodb/shard02/mongodb-shard02.pid
      timeZoneInfo: /usr/share/zoneinfo

    # network interfaces
    net:
      port: 27102
      bindIp: 0.0.0.0
      unixDomainSocket:
        pathPrefix: /var/run/mongodb

    #operationProfiling:
    replication:
      replSetName: ussmongo-shard02

    sharding:
      clusterRole: shardsvr

shard03 configuration

    # cat /etc/mongodb/shard03.conf 
    # where to write logging data.
    systemLog:
      destination: file
      logAppend: true
      path: /data/logs/mongodb/shard03.log

    # Where and how to store data.
    storage:
      dbPath: /data/mongodb/shard03/data
      journal:
        enabled: true
      wiredTiger:
        engineConfig:
           cacheSizeGB: 20

    # how the process runs
    processManagement:
      fork: true
      pidFilePath: /data/mongodb/shard03/mongodb-shard03.pid
      timeZoneInfo: /usr/share/zoneinfo

    # network interfaces
    net:
      port: 27103
      bindIp: 0.0.0.0
      unixDomainSocket:
        pathPrefix: /var/run/mongodb

    #operationProfiling:
    replication:
      replSetName: ussmongo-shard03

    sharding:
      clusterRole: shardsvr
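The three shard configuration files differ only in their index (01/02/03) and port, so they can be generated from a single template. A sketch of such a generator (not part of the original setup; the real run would redirect its output to the /etc/mongodb/shardNN.conf paths above):

```shell
#!/bin/sh
# Emit a shard config for a given index and port to stdout.
# Usage: gen_shard_conf <index> <port>, e.g. gen_shard_conf 01 27101
gen_shard_conf() {
    idx="$1"; port="$2"
    cat <<EOF
systemLog:
  destination: file
  logAppend: true
  path: /data/logs/mongodb/shard${idx}.log

storage:
  dbPath: /data/mongodb/shard${idx}/data
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 20

processManagement:
  fork: true
  pidFilePath: /data/mongodb/shard${idx}/mongodb-shard${idx}.pid
  timeZoneInfo: /usr/share/zoneinfo

net:
  port: ${port}
  bindIp: 0.0.0.0
  unixDomainSocket:
    pathPrefix: /var/run/mongodb

replication:
  replSetName: ussmongo-shard${idx}

sharding:
  clusterRole: shardsvr
EOF
}
# Real run: gen_shard_conf 01 27101 > /etc/mongodb/shard01.conf  (then 02/27102, 03/27103)
```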

mongos configuration

    # cat /etc/mongodb/mongos.conf 
    systemLog:
      destination: file
      logAppend: true
      path: /data/logs/mongodb/mongos.log

    processManagement:
      fork: true
    #  pidFilePath: /data/mongodb/mongos.pid

    # network interfaces
    net:
      port: 27017
      bindIp: 0.0.0.0
      unixDomainSocket:
        pathPrefix: /var/run/mongodb

    sharding:
      configDB: ussmongo-config/172.18.1.31:27018,172.18.1.32:27018,172.18.1.33:27018

    setParameter:
      diagnosticDataCollectionDirectoryPath: /data/mongodb/mongos/diagnostic.data/

3.2 Prepare the service files

To manage the processes uniformly, they are run as systemd services, which also allows them to start automatically at boot.

mongo-shard

    # cat /usr/lib/systemd/system/mongo-shard@.service 
    [Unit]
    Description=MongoDB Database Shard Service
    After=network.target
    Documentation=https://docs.mongodb.org/manual
    PartOf=mongo-shard.target

    [Service]
    User=mongod
    Group=mongod
    Environment="OPTIONS=--quiet -f /etc/mongodb/shard%i.conf"
    EnvironmentFile=-/etc/sysconfig/mongod
    ExecStart=/usr/bin/mongod $OPTIONS
    PermissionsStartOnly=true
    Type=forking
    TasksMax=infinity
    TasksAccounting=false

    [Install]
    WantedBy=mongo-shard.target
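mongo-shard@.service is a systemd template unit: the text after the `@` in an instance name replaces every `%i`, so mongo-shard@01 launches mongod with /etc/mongodb/shard01.conf. A small stand-alone illustration of that substitution:

```shell
#!/bin/sh
# Illustration only: mimic how systemd expands %i in the template unit above.
expand_instance() {
    instance="${1#*@}"    # mongo-shard@01 -> 01
    echo "/usr/bin/mongod --quiet -f /etc/mongodb/shard${instance}.conf"
}
expand_instance mongo-shard@01    # prints the resolved ExecStart for instance 01
```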

mongo-config

    # cat /usr/lib/systemd/system/mongo-config.service 
    [Unit]
    Description=MongoDB Database Config Service
    After=network.target
    Documentation=https://docs.mongodb.org/manual
    PartOf=mongo.target

    [Service]
    User=mongod
    Group=mongod
    Environment="OPTIONS=--quiet -f /etc/mongodb/config.conf"
    EnvironmentFile=-/etc/sysconfig/mongod
    ExecStart=/usr/bin/mongod $OPTIONS
    PermissionsStartOnly=true
    Type=forking
    TasksMax=infinity
    TasksAccounting=false

    [Install]
    WantedBy=mongo.target

mongos

    # cat /usr/lib/systemd/system/mongos.service 
    [Unit]
    Description=MongoDB Database Service
    After=syslog.target network.target
    PartOf=mongo.target

    [Service]
    User=mongod
    Group=mongod
    Environment="OPTIONS=--quiet -f /etc/mongodb/mongos.conf"
    ExecStart=/usr/bin/mongos $OPTIONS
    Type=forking
    PrivateTmp=true
    LimitNOFILE=64000
    TimeoutStartSec=180

    [Install]
    WantedBy=mongo.target    

To make batch management easier, create target units.

/usr/lib/systemd/system/mongo-shard.target

    [Unit]
    Description=mongo shard target allowing to start/stop all mongo-shard@.service instances at once
    PartOf=mongo.target

    [Install]
    WantedBy=mongo.target

/usr/lib/systemd/system/mongo.target

    [Unit]
    Description=mongo target allowing to start/stop all mongo*.service instances at once

    [Install]
    WantedBy=multi-user.target

Reload systemd and enable the services to start at boot:

    # systemctl daemon-reload
    # systemctl enable mongo-shard@01
    # systemctl enable mongo-shard@02
    # systemctl enable mongo-shard@03
    # systemctl enable mongo-config
    # systemctl enable mongos
    # systemctl enable mongo-shard.target
    # systemctl enable mongo.target
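The seven enable commands can also be driven from a loop; a dry-run sketch that just prints each command (remove the echo to execute):

```shell
#!/bin/sh
# Dry run: print the enable command for every unit in the cluster.
units="mongo-shard@01 mongo-shard@02 mongo-shard@03 mongo-config mongos mongo-shard.target mongo.target"
for u in $units; do
    echo systemctl enable "$u"
done
```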

Copy the configuration files and unit files above to the corresponding directories on every node.
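Assuming root SSH access between the nodes (an assumption, not stated in the original), the copy can be scripted; a dry-run sketch that prints the transfer commands for each peer (drop the echo prefixes to execute):

```shell
#!/bin/sh
# Dry run: print the commands that would copy configs and unit files to a peer.
push_cluster_files() {
    host="$1"
    echo scp -r /etc/mongodb "root@${host}:/etc/"
    echo scp /usr/lib/systemd/system/mongo-shard@.service \
        /usr/lib/systemd/system/mongo-config.service \
        /usr/lib/systemd/system/mongos.service \
        /usr/lib/systemd/system/mongo-shard.target \
        /usr/lib/systemd/system/mongo.target \
        "root@${host}:/usr/lib/systemd/system/"
    echo ssh "root@${host}" systemctl daemon-reload
}
for host in 172.18.1.32 172.18.1.33; do
    push_cluster_files "$host"
done
```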

Start the services:

    systemctl start mongo.target

At this point mongos will fail to start; the replica sets need to be configured first.

3.3 Configure the replica sets

Both the config and shard services are mongod processes underneath, and each is configured as a three-member replica set. The commands below can be run on any one of the three nodes and only need to be run once.

config replica set:

    # mongo --port 27018
    > config = {
        _id : "ussmongo-config",
        members : [
            {_id : 0, host : "172.18.1.31:27018" },
            {_id : 1, host : "172.18.1.32:27018" },
            {_id : 2, host : "172.18.1.33:27018" }
        ]
      }
    > rs.initiate(config)

shard01 replica set:

    # mongo --port 27101
    > config = {
        _id : "ussmongo-shard01",
        members : [
            {_id : 0, host : "172.18.1.31:27101" },
            {_id : 1, host : "172.18.1.32:27101" },
            {_id : 2, host : "172.18.1.33:27101" }
        ]
      }
    > rs.initiate(config)

shard02 replica set:

    # mongo --port 27102
    > config = {
        _id : "ussmongo-shard02",
        members : [
            {_id : 0, host : "172.18.1.31:27102" },
            {_id : 1, host : "172.18.1.32:27102" },
            {_id : 2, host : "172.18.1.33:27102" }
        ]
      }
    > rs.initiate(config)

shard03 replica set:

    # mongo --port 27103
    > config = {
        _id : "ussmongo-shard03",
        members : [
            {_id : 0, host : "172.18.1.31:27103" },
            {_id : 1, host : "172.18.1.32:27103" },
            {_id : 2, host : "172.18.1.33:27103" }
        ]
      }
    > rs.initiate(config)
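All four initiations follow the same pattern, so the commands can be generated; a convenience sketch not in the original that prints equivalent one-shot `mongo --eval` invocations (run them on a single node only):

```shell
#!/bin/sh
# Print a one-shot rs.initiate command for each replica set in the cluster.
hosts="172.18.1.31 172.18.1.32 172.18.1.33"
init_rs_cmd() {
    name="$1"; port="$2"; members=""; i=0
    for h in $hosts; do
        members="${members}${members:+,}{_id:$i,host:\"$h:$port\"}"
        i=$((i+1))
    done
    echo "mongo --port $port --eval 'rs.initiate({_id:\"$name\",members:[$members]})'"
}
init_rs_cmd ussmongo-config  27018
init_rs_cmd ussmongo-shard01 27101
init_rs_cmd ussmongo-shard02 27102
init_rs_cmd ussmongo-shard03 27103
```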

Now restart the cluster, and the mongos service will come up normally:

    systemctl restart mongo.target

3.4 Configure routing to the shards

mongos is the cluster's entry point and serves external clients. Before it can route anything, the shards must be registered. The shard metadata is stored on the config servers, which all mongos instances share, so this only needs to be done once, through any one mongos:

    # mongo --port 27017
    mongos> use admin
    mongos> sh.addShard("ussmongo-shard01/172.18.1.31:27101,172.18.1.32:27101,172.18.1.33:27101")
    mongos> sh.addShard("ussmongo-shard02/172.18.1.31:27102,172.18.1.32:27102,172.18.1.33:27102")
    mongos> sh.addShard("ussmongo-shard03/172.18.1.31:27103,172.18.1.32:27103,172.18.1.33:27103")
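Registering shards does not by itself distribute any data; sharding still has to be enabled per database and per collection, which the guide does not cover. A hedged sketch of those follow-up commands (the `testdb` database, `people` collection, and hashed `_id` key are illustrative assumptions):

```shell
#!/bin/sh
# Commands to paste into the mongos shell (mongo --port 27017); the database,
# collection, and shard key below are illustrative, not from the original.
shard_commands() {
    cat <<'EOF'
sh.enableSharding("testdb")
sh.shardCollection("testdb.people", { _id: "hashed" })
sh.status()
EOF
}
shard_commands
```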

This completes the highly available, multi-replica sharded cluster, and MongoDB can serve clients normally. A load balancer can additionally be placed in front of the three mongos instances to finish the high-availability setup.

4. Enabling Access Control

Connecting to the cluster without authentication is unsafe and should not be allowed in production, so authentication must be enabled.

4.1 Add users

Users added through mongos are stored in the config replica set but not in the shard replica sets, so users must be created separately on config, shard01, shard02, and shard03.
These commands only succeed on the primary of each replica set, so take care which node you run them on.

config replica set:

    # mongo --port 27018
    > use admin
    > db.createUser(
       {
         user: "admin",
         pwd: "admin",
         roles: ["userAdminAnyDatabase", "dbAdminAnyDatabase", "readWriteAnyDatabase", "clusterAdmin"]
       }
     )

shard01 replica set:

    # mongo --port 27101
    > use admin
    > db.createUser(
       {
         user: "admin",
         pwd: "admin",
         roles: ["userAdminAnyDatabase", "dbAdminAnyDatabase", "readWriteAnyDatabase", "clusterAdmin"]
       }
    )

shard02 replica set:

    # mongo --port 27102
    > use admin
    > db.createUser(
       {
         user: "admin",
         pwd: "admin",
         roles: ["userAdminAnyDatabase", "dbAdminAnyDatabase", "readWriteAnyDatabase", "clusterAdmin"]
       }
    )    

shard03 replica set:

    # mongo --port 27103
    > use admin
    > db.createUser(
       {
         user: "admin",
         pwd: "admin",
         roles: ["userAdminAnyDatabase", "dbAdminAnyDatabase", "readWriteAnyDatabase", "clusterAdmin"]
       }
    )
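Since the same administrator is created on all four replica sets, the interactive shells above can be reduced to one-liners; a sketch that prints them (each must still be run against that replica set's current primary):

```shell
#!/bin/sh
# Print the equivalent createUser one-liner for each replica set's port;
# the credentials mirror the interactive shells above.
create_admin_cmd() {
    port="$1"
    echo "mongo --port $port admin --eval 'db.createUser({user:\"admin\",pwd:\"admin\",roles:[\"userAdminAnyDatabase\",\"dbAdminAnyDatabase\",\"readWriteAnyDatabase\",\"clusterAdmin\"]})'"
}
for port in 27018 27101 27102 27103; do
    create_admin_cmd "$port"
done
```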

4.2 Enable access control

Create a key file.
With access control enabled, external clients must authenticate to MongoDB, while mongos authenticates to the config and shard services with a shared key file:

    # openssl rand -base64 756 > /data/mongodb/ussmongo.key
    # chmod 0600 /data/mongodb/ussmongo.key
    # chown mongod:mongod /data/mongodb/ussmongo.key
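The key file must be byte-identical on every node and unreadable to other users. A self-contained sanity check of the generation step, using a temporary path for illustration (756 random bytes encode to 1008 base64 characters, within mongod's accepted keyFile length):

```shell
#!/bin/sh
# Generate a key the same way as above, into a temporary file, and show it;
# on the real servers the path is /data/mongodb/ussmongo.key.
keyfile=$(mktemp)
openssl rand -base64 756 > "$keyfile"
chmod 0600 "$keyfile"
ls -l "$keyfile"    # mode should read -rw-------
```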

Copy the key file to every node.

Add the security settings.
To the mongos configuration add:

    security:
      keyFile: /data/mongodb/ussmongo.key

To the config and shard configurations add:

    security:
      authorization: enabled
      keyFile: /data/mongodb/ussmongo.key

Restart all services:

    # systemctl restart mongo.target

This completes the MongoDB high-availability cluster with access control.

5. Log Rotation

5.1 Create the log rotation script

    # cat /opt/scripts/lograte_mongod.sh
    #!/bin/bash
    #Rotate the MongoDB logs to prevent a single logfile from consuming too much disk space.

    app=mongod

    mongodPath=/usr/bin

    pidArray=$(pidof $mongodPath/$app)

    for pid in $pidArray; do
        if [ -n "$pid" ]; then
            echo "$pid"
            kill -USR1 "$pid"    # SIGUSR1 tells mongod to rotate its log
        fi
    done

    exit 0
    
Make the script executable:

    # chmod +x /opt/scripts/lograte_mongod.sh
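On SIGUSR1, mongod renames the live log to `<name>.log.<timestamp>` and starts a fresh file, so rotated logs accumulate on disk. A hypothetical cleanup companion (the function name and the 30-day retention are assumptions, not part of the original guide):

```shell
#!/bin/sh
# Delete rotated log files (name.log.<timestamp>) older than a retention
# window, leaving the live *.log files untouched.
prune_mongo_logs() {
    dir="$1"; days="$2"
    find "$dir" -name '*.log.*' -type f -mtime "+$days" -delete
}
# Real run, e.g. from the same cron job: prune_mongo_logs /data/logs/mongodb 30
```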

5.2 Schedule it with cron

    # crontab -e
    # mongod log rotation
    00 00 * * * /opt/scripts/lograte_mongod.sh >/dev/null 2>&1

Reference: https://yq.aliyun.com/articles/625991?spm=5176.10695662.1996646101.searchclickresult.374f3653bPUSs8

6. Troubleshooting

6.1 Repairing the cluster after an abnormal shutdown

If the machines shut down abnormally and the cluster will not start afterwards, the data needs to be repaired.

    # Remove stale lock files from each shard's dbPath
    rm -f /data/mongodb/shard01/data/mongod.lock /data/mongodb/shard01/data/WiredTiger.lock
    rm -f /data/mongodb/shard02/data/mongod.lock /data/mongodb/shard02/data/WiredTiger.lock
    rm -f /data/mongodb/shard03/data/mongod.lock /data/mongodb/shard03/data/WiredTiger.lock

    # Repair the data; every shard must be repaired
    mongod --repair -f /etc/mongodb/shard01.conf --nojournal --repairpath /data/mongodb/shard01/data/diagnostic.data/
    mongod --repair -f /etc/mongodb/shard02.conf --nojournal --repairpath /data/mongodb/shard02/data/diagnostic.data/
    mongod --repair -f /etc/mongodb/shard03.conf --nojournal --repairpath /data/mongodb/shard03/data/diagnostic.data/

    # Restart the shards and the remaining services
    systemctl restart mongo-shard@01
    systemctl restart mongo-shard@02
    systemctl restart mongo-shard@03
    systemctl restart mongo-config
    systemctl restart mongos

Author: daydayops
Copyright: Unless otherwise noted, all posts on this blog are licensed under CC BY 4.0. Please credit daydayops when republishing.