Setting Up a MongoDB 4.4 Sharded Cluster on CentOS 8

Table of Contents
  • I. Introduction
    • 1. Sharding
    • 2. Why use sharding
    • 3. How sharding works
  • II. Environment
  • III. Cluster configuration and deployment
  • IV. Testing shard functionality

I. Introduction

1. Sharding

MongoDB offers another kind of cluster besides replication: sharding, which exists to keep up with large growth in data volume.
When MongoDB stores massive amounts of data, a single machine may not be enough to hold it all, or to deliver acceptable read/write throughput. By splitting the data across multiple machines, the database system can store and process more data.

2. Why use sharding

• Every write operation must go through the replica set's primary node
• Latency-sensitive queries must be served by the primary
• A single replica set is limited to 50 members (only 7 of them voting)
• Memory runs short when the request volume gets huge
• Local disk space runs out
• Vertical scaling is expensive

3. How sharding works

Sharding splits the data into chunks and stores those chunks on different servers. MongoDB shards automatically: when a client sends a read or write request, it first passes through the mongos routing layer, which looks up the shard metadata on the config servers and then decides which shard should handle the request.
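
Once the cluster described below is running, you can inspect the metadata mongos routes by. A minimal sketch, run from a mongo shell connected to mongos (tydb.tyuser is the example collection sharded later in this article):

        // run in the mongos shell; the config database holds the chunk metadata
        use config
        // one document per chunk: its shard-key range and its owning shard
        db.chunks.find({ ns: "tydb.tyuser" }, { min: 1, max: 1, shard: 1 }).limit(5).pretty()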

II. Environment

• OS: CentOS Linux release 8.2.2004 (Core)
• MongoDB version: v4.4.10
• IP: 10.0.0.56; instances: mongos (30000), config (27017), shard1 primary (40001), shard2 arbiter (40002), shard3 secondary (40003)
• IP: 10.0.0.57; instances: mongos (30000), config (27017), shard1 secondary (40001), shard2 primary (40002), shard3 arbiter (40003)
• IP: 10.0.0.58; instances: mongos (30000), config (27017), shard1 arbiter (40001), shard2 secondary (40002), shard3 primary (40003)

III. Cluster configuration and deployment

1. Create the required directories (run on all three servers)

        mkdir -p /mongo/{data,logs,apps,run}
        mkdir -p /mongo/data/shard{1,2,3}
        mkdir -p /mongo/data/config
        mkdir -p /mongo/apps/conf
        

2. Install MongoDB and create the configuration files (run on all three servers)

You can install MongoDB by downloading the release tarball and setting up the environment variables yourself; here we instead configure a yum repository and install MongoDB through yum, then later start each process by running mongod with the appropriate configuration file.
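
For reference, a minimal /etc/yum.repos.d/mongodb-org-4.4.repo along the lines of the upstream documentation (the baseurl and gpgkey below are the standard mongodb.org ones; adjust them if you mirror the repository internally):

        [mongodb-org-4.4]
        name=MongoDB Repository
        baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.4/x86_64/
        gpgcheck=1
        enabled=1
        gpgkey=https://www.mongodb.org/static/pgp/server-4.4.asc

Then install with yum install -y mongodb-org.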

(1) The mongo-config configuration file

        vim /mongo/apps/conf/mongo-config.yml
        
        systemLog:
          destination: file
          # log file path
          path: "/mongo/logs/mongo-config.log"
          logAppend: true
        storage:
          journal:
            enabled: true
          # data directory
          dbPath: "/mongo/data/config"
          engine: wiredTiger
          wiredTiger:
            engineConfig:
              cacheSizeGB: 12
        processManagement:
          fork: true
          pidFilePath: "/mongo/run/mongo-config.pid"
        net:
          # this can also be set to the host's own IP
          bindIp: 0.0.0.0
          # port
          port: 27017
        setParameter:
          enableLocalhostAuthBypass: true
        replication:
          # replica set name
          replSetName: "mgconfig"
        sharding:
          # act as a config server
          clusterRole: configsvr
        

(2) The mongo-shard1 configuration file

        vim /mongo/apps/conf/mongo-shard1.yml
        
        systemLog:
          destination: file
          path: "/mongo/logs/mongo-shard1.log"
          logAppend: true
        storage:
          journal:
            enabled: true
          dbPath: "/mongo/data/shard1"
        processManagement:
          fork: true
          pidFilePath: "/mongo/run/mongo-shard1.pid"
        net:
          bindIp: 0.0.0.0
          # note the port
          port: 40001
        setParameter:
          enableLocalhostAuthBypass: true
        replication:
          # replica set name
          replSetName: "shard1"
        sharding:
          # act as a shard server
          clusterRole: shardsvr
        

(3) The mongo-shard2 configuration file

        vim /mongo/apps/conf/mongo-shard2.yml
        
        systemLog:
          destination: file
          path: "/mongo/logs/mongo-shard2.log"
          logAppend: true
        storage:
          journal:
            enabled: true
          dbPath: "/mongo/data/shard2"
        processManagement:
          fork: true
          pidFilePath: "/mongo/run/mongo-shard2.pid"
        net:
          bindIp: 0.0.0.0
          # note the port
          port: 40002
        setParameter:
          enableLocalhostAuthBypass: true
        replication:
          # replica set name
          replSetName: "shard2"
        sharding:
          # act as a shard server
          clusterRole: shardsvr
        

(4) The mongo-shard3 configuration file

        vim /mongo/apps/conf/mongo-shard3.yml
        
        systemLog:
          destination: file
          path: "/mongo/logs/mongo-shard3.log"
          logAppend: true
        storage:
          journal:
            enabled: true
          dbPath: "/mongo/data/shard3"
        processManagement:
          fork: true
          pidFilePath: "/mongo/run/mongo-shard3.pid"
        net:
          bindIp: 0.0.0.0
          # note the port
          port: 40003
        setParameter:
          enableLocalhostAuthBypass: true
        replication:
          # replica set name
          replSetName: "shard3"
        sharding:
          # act as a shard server
          clusterRole: shardsvr
        

(5) The mongo-route configuration file

        vim /mongo/apps/conf/mongo-route.yml
        
        systemLog:
          destination: file
          # note the log path
          path: "/mongo/logs/mongo-route.log"
          logAppend: true
        processManagement:
          fork: true
          pidFilePath: "/mongo/run/mongo-route.pid"
        net:
          bindIp: 0.0.0.0
          # note the port
          port: 30000
        setParameter:
          enableLocalhostAuthBypass: true
        replication:
          localPingThresholdMs: 15
        sharding:
          # connection string for the config server replica set
          configDB: mgconfig/10.0.0.56:27017,10.0.0.57:27017,10.0.0.58:27017
        

3. Start the mongo-config service (run on all three servers)

        # stop the mongod service previously installed via yum
        systemctl stop mongod
        
        cd /mongo/apps/conf/
        mongod --config mongo-config.yml
        
        # check that port 27017 is listening
        netstat -ntpl
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
        tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1129/sshd
        tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      1131/cupsd
        tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN      2514/sshd: root@pts
        tcp        0      0 127.0.0.1:6011          0.0.0.0:*               LISTEN      4384/sshd: root@pts
        tcp        0      0 0.0.0.0:27017           0.0.0.0:*               LISTEN      4905/mongod
        tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd
        tcp6       0      0 :::22                   :::*                    LISTEN      1129/sshd
        tcp6       0      0 ::1:631                 :::*                    LISTEN      1131/cupsd
        tcp6       0      0 ::1:6010                :::*                    LISTEN      2514/sshd: root@pts
        tcp6       0      0 ::1:6011                :::*                    LISTEN      4384/sshd: root@pts
        tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd
        

4. Connect to one instance and initialize the config replica set

        # connect with the mongo shell
        mongo 10.0.0.56:27017
        
        # build the initial replica set config; mgconfig must match the replSetName in the config file
        config={_id:"mgconfig",members:[ 
          {_id:0,host:"10.0.0.56:27017"},
          {_id:1,host:"10.0.0.57:27017"},
          {_id:2,host:"10.0.0.58:27017"}, 
        ]}
        
        rs.initiate(config)
        # ok: 1 means the initialization succeeded
        {
        	"ok" : 1,
        	"$gleStats" : {
        		"lastOpTime" : Timestamp(1634710950, 1),
        		"electionId" : ObjectId("000000000000000000000000")
        	},
        	"lastCommittedOpTime" : Timestamp(0, 0)
        }
        
        # check the status
        rs.status()
        
        {
        	"set" : "mgconfig",
        	"date" : ISODate("2021-10-20T06:24:24.277Z"),
        	"myState" : 1,
        	"term" : NumberLong(1),
        	"syncSourceHost" : "",
        	"syncSourceId" : -1,
        	"configsvr" : true,
        	"heartbeatIntervalMillis" : NumberLong(2000),
        	"majorityVoteCount" : 2,
        	"writeMajorityCount" : 2,
        	"votingMembersCount" : 3,
        	"writableVotingMembersCount" : 3,
        	"optimes" : {
        		"lastCommittedOpTime" : {
        			"ts" : Timestamp(1634711063, 1),
        			"t" : NumberLong(1)
        		},
        		"lastCommittedWallTime" : ISODate("2021-10-20T06:24:23.811Z"),
        		"readConcernMajorityOpTime" : {
        			"ts" : Timestamp(1634711063, 1),
        			"t" : NumberLong(1)
        		},
        		"readConcernMajorityWallTime" : ISODate("2021-10-20T06:24:23.811Z"),
        		"appliedOpTime" : {
        			"ts" : Timestamp(1634711063, 1),
        			"t" : NumberLong(1)
        		},
        		"durableOpTime" : {
        			"ts" : Timestamp(1634711063, 1),
        			"t" : NumberLong(1)
        		},
        		"lastAppliedWallTime" : ISODate("2021-10-20T06:24:23.811Z"),
        		"lastDurableWallTime" : ISODate("2021-10-20T06:24:23.811Z")
        	},
        	"lastStableRecoveryTimestamp" : Timestamp(1634711021, 1),
        	"electionCandidateMetrics" : {
        		"lastElectionReason" : "electionTimeout",
        		"lastElectionDate" : ISODate("2021-10-20T06:22:41.335Z"),
        		"electionTerm" : NumberLong(1),
        		"lastCommittedOpTimeAtElection" : {
        			"ts" : Timestamp(0, 0),
        			"t" : NumberLong(-1)
        		},
        		"lastSeenOpTimeAtElection" : {
        			"ts" : Timestamp(1634710950, 1),
        			"t" : NumberLong(-1)
        		},
        		"numVotesNeeded" : 2,
        		"priorityAtElection" : 1,
        		"electionTimeoutMillis" : NumberLong(10000),
        		"numCatchUpOps" : NumberLong(0),
        		"newTermStartDate" : ISODate("2021-10-20T06:22:41.509Z"),
        		"wMajorityWriteAvailabilityDate" : ISODate("2021-10-20T06:22:42.322Z")
        	},
        	"members" : [
        		{
        			"_id" : 0,
        			"name" : "10.0.0.56:27017",
        			"health" : 1,
        			"state" : 1,
        			"stateStr" : "PRIMARY",
        			"uptime" : 530,
        			"optime" : {
        				"ts" : Timestamp(1634711063, 1),
        				"t" : NumberLong(1)
        			},
        			"optimeDate" : ISODate("2021-10-20T06:24:23Z"),
        			"syncSourceHost" : "",
        			"syncSourceId" : -1,
        			"infoMessage" : "",
        			"electionTime" : Timestamp(1634710961, 1),
        			"electionDate" : ISODate("2021-10-20T06:22:41Z"),
        			"configVersion" : 1,
        			"configTerm" : 1,
        			"self" : true,
        			"lastHeartbeatMessage" : ""
        		},
        		{
        			"_id" : 1,
        			"name" : "10.0.0.57:27017",
        			"health" : 1,
        			"state" : 2,
        			"stateStr" : "SECONDARY",
        			"uptime" : 113,
        			"optime" : {
        				"ts" : Timestamp(1634711061, 1),
        				"t" : NumberLong(1)
        			},
        			"optimeDurable" : {
        				"ts" : Timestamp(1634711061, 1),
        				"t" : NumberLong(1)
        			},
        			"optimeDate" : ISODate("2021-10-20T06:24:21Z"),
        			"optimeDurableDate" : ISODate("2021-10-20T06:24:21Z"),
        			"lastHeartbeat" : ISODate("2021-10-20T06:24:22.487Z"),
        			"lastHeartbeatRecv" : ISODate("2021-10-20T06:24:22.906Z"),
        			"pingMs" : NumberLong(0),
        			"lastHeartbeatMessage" : "",
        			"syncSourceHost" : "10.0.0.56:27017",
        			"syncSourceId" : 0,
        			"infoMessage" : "",
        			"configVersion" : 1,
        			"configTerm" : 1
        		},
        		{
        			"_id" : 2,
        			"name" : "10.0.0.58:27017",
        			"health" : 1,
        			"state" : 2,
        			"stateStr" : "SECONDARY",
        			"uptime" : 113,
        			"optime" : {
        				"ts" : Timestamp(1634711062, 1),
        				"t" : NumberLong(1)
        			},
        			"optimeDurable" : {
        				"ts" : Timestamp(1634711062, 1),
        				"t" : NumberLong(1)
        			},
        			"optimeDate" : ISODate("2021-10-20T06:24:22Z"),
        			"optimeDurableDate" : ISODate("2021-10-20T06:24:22Z"),
        			"lastHeartbeat" : ISODate("2021-10-20T06:24:23.495Z"),
        			"lastHeartbeatRecv" : ISODate("2021-10-20T06:24:22.514Z"),
        			"pingMs" : NumberLong(0),
        			"lastHeartbeatMessage" : "",
        			"syncSourceHost" : "10.0.0.56:27017",
        			"syncSourceId" : 0,
        			"infoMessage" : "",
        			"configVersion" : 1,
        			"configTerm" : 1
        		}
        	],
        	"ok" : 1,
        	"$gleStats" : {
        		"lastOpTime" : Timestamp(1634710950, 1),
        		"electionId" : ObjectId("7fffffff0000000000000001")
        	},
        	"lastCommittedOpTime" : Timestamp(1634711063, 1),
        	"$clusterTime" : {
        		"clusterTime" : Timestamp(1634711063, 1),
        		"signature" : {
        			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
        			"keyId" : NumberLong(0)
        		}
        	},
        	"operationTime" : Timestamp(1634711063, 1)
        }
        

5. Configure and deploy the shard1 replica set: start the shard1 instances (run on all three servers)

        cd /mongo/apps/conf
        mongod --config mongo-shard1.yml
        
        # check that port 40001 is listening
        netstat -ntpl
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
        tcp        0      0 0.0.0.0:40001           0.0.0.0:*               LISTEN      5742/mongod
        tcp        0      0 0.0.0.0:27017           0.0.0.0:*               LISTEN      5443/mongod
        tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd
        tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1139/sshd
        tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      1133/cupsd
        tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN      2490/sshd: root@pts
        tcp        0      0 127.0.0.1:6011          0.0.0.0:*               LISTEN      5189/sshd: root@pts
        tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd
        tcp6       0      0 :::22                   :::*                    LISTEN      1139/sshd
        tcp6       0      0 ::1:631                 :::*                    LISTEN      1133/cupsd
        tcp6       0      0 ::1:6010                :::*                    LISTEN      2490/sshd: root@pts
        tcp6       0      0 ::1:6011                :::*                    LISTEN      5189/sshd: root@pts
        

6. Connect to one instance and create the replica set

        # connect with the mongo shell
        mongo 10.0.0.56:40001
        
        # build the initial replica set config
        config={_id:"shard1",members:[ 
          {_id:0,host:"10.0.0.56:40001",priority:2},
          {_id:1,host:"10.0.0.57:40001",priority:1},
          {_id:2,host:"10.0.0.58:40001",arbiterOnly:true}, 
        ]}
        
        rs.initiate(config)
        # check the status
        rs.status()
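
rs.status() prints a lot of detail. As a quick check that the roles match the plan (10.0.0.56 primary, 10.0.0.57 secondary, 10.0.0.58 arbiter), a small helper expression (an addition to the original steps) projects just the member names and states:

        // expect PRIMARY / SECONDARY / ARBITER in the planned order
        rs.status().members.map(function (m) { return m.name + " -> " + m.stateStr; })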
        

7. Configure and deploy the shard2 replica set: start the shard2 instances (run on all three servers)

        cd /mongo/apps/conf
        mongod --config mongo-shard2.yml
        
        # check that port 40002 is listening
        netstat -ntpl
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
        tcp        0      0 0.0.0.0:40001           0.0.0.0:*               LISTEN      5742/mongod
        tcp        0      0 0.0.0.0:40002           0.0.0.0:*               LISTEN      5982/mongod
        tcp        0      0 0.0.0.0:27017           0.0.0.0:*               LISTEN      5443/mongod
        tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd
        tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1139/sshd
        tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      1133/cupsd
        tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN      2490/sshd: root@pts
        tcp        0      0 127.0.0.1:6011          0.0.0.0:*               LISTEN      5189/sshd: root@pts
        tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd
        tcp6       0      0 :::22                   :::*                    LISTEN      1139/sshd
        tcp6       0      0 ::1:631                 :::*                    LISTEN      1133/cupsd
        tcp6       0      0 ::1:6010                :::*                    LISTEN      2490/sshd: root@pts
        tcp6       0      0 ::1:6011                :::*                    LISTEN      5189/sshd: root@pts
        

8. Connect to the second host and create the replica set

Our plan makes 10.0.0.57:40002 the primary of shard2, and an arbiter cannot accept writes, so connect to the 10.0.0.57 host.

        # connect with the mongo shell
        mongo 10.0.0.57:40002
        
        # build the initial replica set config
        config={_id:"shard2",members:[
          {_id:0,host:"10.0.0.56:40002",arbiterOnly:true}, 
          {_id:1,host:"10.0.0.57:40002",priority:2}, 
          {_id:2,host:"10.0.0.58:40002",priority:1}, 
        ]}
        
        rs.initiate(config)
        # check the status
        rs.status()
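
Once the election settles, you can confirm the primary from any member; a supplementary one-liner:

        // should print 10.0.0.57:40002 for shard2
        db.isMaster().primary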
        

9. Configure and deploy the shard3 replica set: start the shard3 instances (run on all three servers)

        cd /mongo/apps/conf/
        mongod --config mongo-shard3.yml
        
        # check that port 40003 is listening
        netstat -ntpl
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
        tcp        0      0 0.0.0.0:40001           0.0.0.0:*               LISTEN      5742/mongod
        tcp        0      0 0.0.0.0:40002           0.0.0.0:*               LISTEN      5982/mongod
        tcp        0      0 0.0.0.0:40003           0.0.0.0:*               LISTEN      6454/mongod
        tcp        0      0 0.0.0.0:27017           0.0.0.0:*               LISTEN      5443/mongod
        tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd
        tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1139/sshd
        tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      1133/cupsd
        tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN      2490/sshd: root@pts
        tcp        0      0 127.0.0.1:6011          0.0.0.0:*               LISTEN      5189/sshd: root@pts
        tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd
        tcp6       0      0 :::22                   :::*                    LISTEN      1139/sshd
        tcp6       0      0 ::1:631                 :::*                    LISTEN      1133/cupsd
        tcp6       0      0 ::1:6010                :::*                    LISTEN      2490/sshd: root@pts
        tcp6       0      0 ::1:6011                :::*                    LISTEN      5189/sshd: root@pts
        

10. Connect to the third host (10.0.0.58:40003) and create the replica set

        # connect with the mongo shell
        mongo 10.0.0.58:40003
        
        # build the initial replica set config
        config={_id:"shard3",members:[ 
          {_id:0,host:"10.0.0.56:40003",priority:1}, 
          {_id:1,host:"10.0.0.57:40003",arbiterOnly:true}, 
          {_id:2,host:"10.0.0.58:40003",priority:2}, 
        ]}
        
        rs.initiate(config)
        # check the status
        rs.status()
        

11. Configure and deploy the router

        # the router is started with mongos rather than mongod
        mongos --config mongo-route.yml
        
        # connect and add the shards to the cluster
        mongo 10.0.0.56:30000
        
        sh.addShard("shard1/10.0.0.56:40001,10.0.0.57:40001,10.0.0.58:40001")
        sh.addShard("shard2/10.0.0.56:40002,10.0.0.57:40002,10.0.0.58:40002")
        sh.addShard("shard3/10.0.0.56:40003,10.0.0.57:40003,10.0.0.58:40003")
        
        # check the sharding status
        sh.status()
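
Besides sh.status(), the listShards admin command gives a compact confirmation that all three shards registered (a supplementary check, run against mongos):

        // each entry's "host" field shows the replica set name and its data-bearing members
        db.adminCommand({ listShards: 1 })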
        

IV. Testing shard functionality

        # list all databases
        mongos> show dbs
        admin   0.000GB
        config  0.003GB
        # switch to the config database
        use config
        # the default chunk size is 64MB (you can see it with db.settings.find()); to make the test easier to observe, lower it to 1MB
        db.settings.save({"_id":"chunksize","value":1})
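
db.settings.save() still works in the 4.4 shell, but the save() helper is deprecated; an equivalent upsert, plus a read-back to confirm the setting, might look like:

        // same effect as the save() above, without the deprecated helper
        db.getSiblingDB("config").settings.updateOne(
            { _id: "chunksize" },
            { $set: { value: 1 } },
            { upsert: true }
        )
        // verify: should print { "_id" : "chunksize", "value" : 1 }
        db.getSiblingDB("config").settings.find({ _id: "chunksize" })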
        

Simulating data writes

        # insert 60,000 documents into the tyuser collection of the tydb database in a loop
        mongos> use tydb
        mongos> show tables
        mongos> for(i=1;i<=60000;i++){db.tyuser.insert({"id":i,"name":"ty"+i})}
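
Inserting one document per statement is slow over the network. If the loop above drags, a batched variant that writes the same 60,000 documents (a sketch using insertMany) is:

        // insert the same data in batches of 1,000 documents
        use tydb
        var batch = [];
        for (var i = 1; i <= 60000; i++) {
            batch.push({ id: i, name: "ty" + i });
            if (batch.length === 1000) {
                db.tyuser.insertMany(batch);
                batch = [];
            }
        }
        // flush any remainder (none here, since 60,000 divides evenly by 1,000)
        if (batch.length > 0) { db.tyuser.insertMany(batch); }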
        

Enable sharding for the database

        mongos> sh.enableSharding("tydb")
        # ok: 1 means success
        {
        	"ok" : 1,
        	"operationTime" : Timestamp(1634716737, 2),
        	"$clusterTime" : {
        		"clusterTime" : Timestamp(1634716737, 2),
        		"signature" : {
        			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
        			"keyId" : NumberLong(0)
        		}
        	}
        }
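
One step the original walkthrough skips: because tyuser already contains data, MongoDB requires an index on the shard key to exist before the collection can be sharded (on an empty collection the index would be created automatically). Create it before running sh.shardCollection:

        // required when sharding a non-empty collection on {id: 1}
        use tydb
        db.tyuser.createIndex({ id: 1 })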
        

Enable sharding for the collection

        mongos> sh.shardCollection("tydb.tyuser",{"id":1})
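
With the chunk size lowered to 1MB, the 60,000 documents should split into many chunks and spread across the shards. The getShardDistribution() shell helper summarizes document and chunk counts per shard (a supplementary check):

        // per-shard document and chunk statistics for the sharded collection
        db.tyuser.getShardDistribution()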
        

Check the sharding status

        mongos> sh.status()
        

Starting, stopping, and checking the balancer

        # start
        mongos> sh.startBalancer()    # or sh.setBalancerState(true)
        # stop
        mongos> sh.stopBalancer()     # or sh.setBalancerState(false)
        # check whether the balancer is enabled
        mongos> sh.getBalancerState() # returns false when it is off
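
Rather than stopping the balancer outright, production clusters often confine chunk migrations to an off-peak window. A sketch using the documented activeWindow setting in the config database (the times are read against the mongos clock; the window below is just an example):

        // only allow chunk migrations between 02:00 and 06:00
        use config
        db.settings.updateOne(
            { _id: "balancer" },
            { $set: { activeWindow: { start: "02:00", stop: "06:00" } } },
            { upsert: true }
        )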
        

This concludes the walkthrough of building a MongoDB 4.4 sharded cluster on CentOS 8.
