02 ZooKeeper Distributed Cluster Installation


Table of Contents
    • 02 ZooKeeper Distributed Cluster Installation
      • 1. Preparation
      • 2. Passwordless SSH login between every pair of cluster nodes
      • 3. JDK installation and environment variable configuration
      • 4. Building the ZooKeeper cluster

02 ZooKeeper Distributed Cluster Installation

1. Preparation

  Clone four virtual machines, complete their initial configuration (hostname, IP address), and connect to them in Xshell.

Edit the hostname mappings on node1, node2, node3, and node4:

[root@node1 .ssh]# vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.236.31 node1
192.168.236.32 node2
192.168.236.33 node3
192.168.236.34 node4
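
The same mappings must be present on all four nodes. One way (a sketch) is to edit the file once on node1 and push it out with scp, typing each root password when prompted, since passwordless login is only set up in the next step:

[root@node1 ~]# for h in node2 node3 node4; do scp /etc/hosts $h:/etc/hosts; done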
2. Passwordless SSH login between every pair of cluster nodes

  How do we get passwordless login working between all four servers? Each machine generates a key pair: the private key file is id_dsa and the public key file is id_dsa.pub. A machine grants passwordless access to whichever keys appear in its authorized_keys file. For example, if node1 holds node2's public key, then node2 can log in to node1 without a password; likewise, if node3 holds node2's public key, node2 can log in to node3 without a password. So to enable passwordless login, each node's public key must be handed to the nodes that are meant to trust it.
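
  As an aside, most OpenSSH installations ship an ssh-copy-id helper that appends a public key to a remote authorized_keys in one step. Assuming it is available on your system, running the following on each of the four nodes achieves the same result as the manual cat/scp chain in the steps below (a sketch):

    [root@node1 ~]# for h in node1 node2 node3 node4; do ssh-copy-id -i ~/.ssh/id_dsa.pub root@$h; done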

Steps:

  1. First, on all four servers, generate the private and public key files (DSA is used here to match the original environment; recent OpenSSH releases disable DSA by default, in which case ssh-keygen -t rsa works the same way):

    [root@node1 apps]# ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
    Generating public/private dsa key pair.
    Created directory '/root/.ssh'.
    Your identification has been saved in /root/.ssh/id_dsa.
    Your public key has been saved in /root/.ssh/id_dsa.pub.
    The key fingerprint is:
    d5:d7:e6:18:33:b0:5e:58:57:15:4a:86:44:76:c2:ea root@node1
    The key's randomart image is:
    +--[ DSA 1024]----+
    |          +=o+..B|
    |          .+==.o |
    |          o +.* o|
    |         o . o B |
    |        S   . . .|
    |         E       |
    |                 |
    |                 |
    |                 |
    +-----------------+
    
    [root@node1 apps]# cd /root/.ssh/
    [root@node1 .ssh]# ll
    total 8
    -rw------- 1 root root 668 Dec 18 15:27 id_dsa
    -rw-r--r-- 1 root root 600 Dec 18 15:27 id_dsa.pub
    
  2. On node1, append node1's public key to authorized_keys:

    [root@node1 .ssh]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
    

    Note: make sure authorized_keys is spelled exactly right. > overwrites the target file; >> appends to it.
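
    A two-line illustration of the difference, using a throwaway file name:

    [root@node1 .ssh]# echo first > demo.txt     # > overwrites: demo.txt contains only "first"
    [root@node1 .ssh]# echo second >> demo.txt   # >> appends: demo.txt now has both lines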

    Copy the file to node2:

    [root@node1 .ssh]# scp ~/.ssh/authorized_keys node2:/root/.ssh/
    The authenticity of host 'node2 (192.168.236.32)' can't be established.
    RSA key fingerprint is 8e:0e:3f:82:c9:aa:4f:bd:4d:b0:b3:9c:82:a7:b9:86.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'node2,192.168.236.32' (RSA) to the list of known hosts.
    root@node2's password: 
    authorized_keys                                                 100%  600     0.6KB/s   00:00    
    
  3. On node2, append node2's public key to authorized_keys:

    [root@node2 ~]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
    

    Copy the file to node3:

    [root@node2 ~]# scp ~/.ssh/authorized_keys node3:/root/.ssh/
    The authenticity of host 'node3 (192.168.236.33)' can't be established.
    RSA key fingerprint is 8e:0e:3f:82:c9:aa:4f:bd:4d:b0:b3:9c:82:a7:b9:86.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'node3,192.168.236.33' (RSA) to the list of known hosts.
    root@node3's password: 
    authorized_keys                                                 100% 1200     1.2KB/s   00:00    
    
  4. On node3, append node3's public key to authorized_keys:

    [root@node3 ~]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
    

    Copy the file to node4:

    [root@node3 ~]# scp ~/.ssh/authorized_keys node4:/root/.ssh/
    The authenticity of host 'node4 (192.168.236.34)' can't be established.
    RSA key fingerprint is 8e:0e:3f:82:c9:aa:4f:bd:4d:b0:b3:9c:82:a7:b9:86.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'node4,192.168.236.34' (RSA) to the list of known hosts.
    root@node4's password: 
    authorized_keys                                                 100% 1800     1.8KB/s   00:00    
    
  5. On node4, append node4's public key to authorized_keys:

    [root@node4 ~]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
    

    Copy the completed authorized_keys file to node1, node2, and node3:

    [root@node4 ~]# scp ~/.ssh/authorized_keys node1:/root/.ssh/
    The authenticity of host 'node1 (192.168.236.31)' can't be established.
    RSA key fingerprint is 8e:0e:3f:82:c9:aa:4f:bd:4d:b0:b3:9c:82:a7:b9:86.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'node1,192.168.236.31' (RSA) to the list of known hosts.
    root@node1's password: 
    authorized_keys                                                 100% 2400     2.3KB/s   00:00    
    [root@node4 ~]# scp ~/.ssh/authorized_keys node2:/root/.ssh/
    The authenticity of host 'node2 (192.168.236.32)' can't be established.
    RSA key fingerprint is 8e:0e:3f:82:c9:aa:4f:bd:4d:b0:b3:9c:82:a7:b9:86.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'node2,192.168.236.32' (RSA) to the list of known hosts.
    root@node2's password: 
    authorized_keys                                                 100% 2400     2.3KB/s   00:00    
    [root@node4 ~]# scp ~/.ssh/authorized_keys node3:/root/.ssh/
    The authenticity of host 'node3 (192.168.236.33)' can't be established.
    RSA key fingerprint is 8e:0e:3f:82:c9:aa:4f:bd:4d:b0:b3:9c:82:a7:b9:86.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'node3,192.168.236.33' (RSA) to the list of known hosts.
    root@node3's password: 
    authorized_keys                                                 100% 2400     2.3KB/s   00:00    
    

    Passwordless login now works:

    [root@node4 ~]# ssh node1
    Last login: Sat Dec 18 15:24:19 2021 from 192.168.236.1
    [root@node1 ~]# exit
    logout
    Connection to node1 closed.
    [root@node4 ~]# ssh node2
    Last login: Sat Dec 18 16:11:59 2021 from 192.168.236.31
    [root@node2 ~]# exit
    logout
    Connection to node2 closed.
    [root@node4 ~]# ssh node3
    Last login: Sat Dec 18 15:24:47 2021 from 192.168.236.1
    [root@node3 ~]# exit
    logout
    Connection to node3 closed.
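
    To confirm every pair works (including each node to itself), a short loop you can run on each of the four nodes; answer "yes" to any first-time host-key prompts (a sketch):

    [root@node4 ~]# for h in node1 node2 node3 node4; do ssh $h hostname; done
    node1
    node2
    node3
    node4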
    
3. JDK installation and environment variable configuration

  All four servers need the JDK installed and its environment variables configured. When several servers need the same commands, you can use Xshell's send-to-all-sessions feature: select all the sessions and type each command once.
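
  If you prefer the shell to Xshell, the passwordless SSH set up above lets you drive every node from node1. For example, step 1 below can be done in one line (a sketch):

    [root@node1 ~]# for h in node1 node2 node3 node4; do ssh $h mkdir -p /opt/apps; done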

  1. Create the directory /opt/apps on node1, node2, node3, and node4

  2. Upload jdk-8u221-linux-x64.rpm to node1:/opt/apps

  3. scp the JDK rpm from /opt/apps to the same directory on node2, node3, and node4:

    [root@node1 apps]# scp jdk-8u221-linux-x64.rpm node2:/opt/apps
    jdk-8u221-linux-x64.rpm                                         100%  171MB 171.2MB/s   00:01    
    [root@node1 apps]# scp jdk-8u221-linux-x64.rpm node3:/opt/apps
    jdk-8u221-linux-x64.rpm                                         100%  171MB 171.2MB/s   00:01    
    [root@node1 apps]# scp jdk-8u221-linux-x64.rpm node4:/opt/apps
    jdk-8u221-linux-x64.rpm                                         100%  171MB  85.6MB/s   00:02    
    
  4. Install the JDK on node1, node2, node3, and node4, and configure the profile file

    [root@node1 apps]# rpm -ivh jdk-8u221-linux-x64.rpm
    
    [root@node2 apps]# rpm -ivh jdk-8u221-linux-x64.rpm
    
    [root@node3 apps]# rpm -ivh jdk-8u221-linux-x64.rpm
    
    [root@node4 apps]# rpm -ivh jdk-8u221-linux-x64.rpm
    

    Edit the environment variables on node1:

    [root@node1 apps]# vim /etc/profile
    

    Add the following to /etc/profile:

    export JAVA_HOME=/usr/java/default
    export PATH=$PATH:$JAVA_HOME/bin
    

    Copy node1's /etc/profile to node2, node3, and node4:

    [root@node1 apps]# scp /etc/profile node2:/etc
    profile                                                         100% 1863     1.8KB/s   00:00    
    [root@node1 apps]# scp /etc/profile node3:/etc
    profile                                                         100% 1863     1.8KB/s   00:00    
    [root@node1 apps]# scp /etc/profile node4:/etc
    profile                                                         100% 1863     1.8KB/s   00:00    
    

    On each of the four servers, run . /etc/profile (equivalently, source /etc/profile), then check with jps:

    [root@node4 apps]# source /etc/profile
    [root@node4 apps]# jps
    1433 Jps
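
    You can also confirm the JDK itself: java -version on each node should report version 1.8.0_221 in its first output line:

    [root@node4 apps]# java -version
    java version "1.8.0_221"
    ... (runtime and VM build lines follow)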
    
4. Building the ZooKeeper cluster
  1. Upload zookeeper-3.4.6.tar.gz to node2:/opt/apps (node1 is no longer needed and can be shut down)

  2. Extract zookeeper-3.4.6.tar.gz to /opt

    [root@node2 apps]# tar -zxvf zookeeper-3.4.6.tar.gz -C /opt
    
  3. Configure the environment variables by adding to and modifying /etc/profile:

    [root@node2 apps]# vim /etc/profile
    
    export ZOOKEEPER_HOME=/opt/zookeeper-3.4.6
    export PATH=$PATH:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin
    

    Then run . /etc/profile to apply the configuration.

    Finally, scp the file to node3 and node4 and run . /etc/profile on each of them to apply it.

    [root@node2 zookeeper-3.4.6]# scp /etc/profile node3:/etc/
    profile                                                         100% 1928     1.9KB/s   00:00    
    [root@node2 zookeeper-3.4.6]# scp /etc/profile node4:/etc/
    profile                                                         100% 1928     1.9KB/s   00:00   
    
  4. In /opt/zookeeper-3.4.6/conf, copy zoo_sample.cfg to zoo.cfg

    [root@node2 zookeeper-3.4.6]# cd conf/
    [root@node2 conf]# ll
    total 12
    -rw-rw-r-- 1 1000 1000  535 Feb 20  2014 configuration.xsl
    -rw-rw-r-- 1 1000 1000 2161 Feb 20  2014 log4j.properties
    -rw-rw-r-- 1 1000 1000  922 Feb 20  2014 zoo_sample.cfg
    [root@node2 conf]# cp zoo_sample.cfg zoo.cfg
    [root@node2 conf]# ll
    total 16
    -rw-rw-r-- 1 1000 1000  535 Feb 20  2014 configuration.xsl
    -rw-rw-r-- 1 1000 1000 2161 Feb 20  2014 log4j.properties
    -rw-r--r-- 1 root root  922 Dec 18 17:36 zoo.cfg
    -rw-rw-r-- 1 1000 1000  922 Feb 20  2014 zoo_sample.cfg
    
  5. Edit zoo.cfg

    [root@node2 conf]# vim zoo.cfg 
    
    
    
    tickTime=2000    
    initLimit=10    
    syncLimit=5       
    dataDir=/opt/zookeeper-3.4.6/data       
    dataLogDir=/var/bjsxt/zookeeper/datalog 
    clientPort=2181     
    
    
    server.1=node2:2881:3881
    server.2=node3:2881:3881
    server.3=node4:2881:3881
    # Appending observer makes the corresponding node a non-voting observer:
    #server.3=node4:2881:3881 observer
    

    Create the data directory and the log directory:

    [root@node2 conf]# cd /opt/zookeeper-3.4.6/
    [root@node2 zookeeper-3.4.6]# mkdir data
    
    and also:
    
    [root@node2 zookeeper-3.4.6]# mkdir -p /var/bjsxt/zookeeper/datalog
    

    Parameter descriptions:

    • clientPort: the port ZooKeeper listens on for client connections.
    • initLimit: how many heartbeat intervals (tickTime) a "client" may take, at most, to complete its initial connection to the Leader. The clients meant here are not user clients but the Follower servers in the ensemble that connect to the Leader. If the Leader has received nothing back after that many heartbeats, the connection attempt is considered failed. With initLimit=10 and tickTime=2000, the total is 10 * 2000 ms = 20 seconds.
    • tickTime: the heartbeat interval, in milliseconds.
    • syncLimit: the maximum number of tickTime intervals a request/response exchange between the Leader and a Follower may take: 5 * 2000 ms = 10 seconds.
    • server.A=B:C:D: A is a number saying which server this is; B is the server's IP address (or hostname); C is the port this server uses to exchange information with the ensemble's Leader; D is the port used to re-run leader election if the current Leader dies, i.e. the port the servers use to talk to each other during an election. In a pseudo-cluster configuration, B is the same for every instance, so the different ZooKeeper instances must be given distinct ports.
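
    A quick sanity check of the effective timeouts straight from the file (a throwaway awk sketch):

    [root@node2 conf]# awk -F= '/^tickTime/{t=$2}/^initLimit/{i=$2}/^syncLimit/{s=$2}END{print "init: "t*i/1000"s  sync: "t*s/1000"s"}' zoo.cfg
    init: 20s  sync: 10s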
  6. In /opt/zookeeper-3.4.6/data, create a file named myid and write this ZooKeeper server's number into it

    [root@node2 data]# echo 1 > myid
    [root@node2 data]# cat myid 
    1
    
  7. From node2, copy the configured ZooKeeper to node3 and node4

    [root@node2 opt]# scp -r zookeeper-3.4.6/ node3:/opt/
    
    [root@node2 opt]# scp -r zookeeper-3.4.6/ node4:/opt/
    
  8. On node3 and node4, adjust myid:

    node3:

    [root@node3 apps]# echo 2 > /opt/zookeeper-3.4.6/data/myid
    [root@node3 apps]# cat /opt/zookeeper-3.4.6/data/myid
    2
    

    node4:

    [root@node4 apps]# echo 3 > /opt/zookeeper-3.4.6/data/myid
    [root@node4 apps]# cat /opt/zookeeper-3.4.6/data/myid
    3
    
  9. Create the log directory on node3 and node4

    [root@node3 apps]# mkdir -p /var/bjsxt/zookeeper/datalog
    
    [root@node4 apps]# mkdir -p /var/bjsxt/zookeeper/datalog
    
  10. Start ZooKeeper on node2, node3, and node4

    node2:

    [root@node2 data]# cd /opt/zookeeper-3.4.6/bin/
    
    [root@node2 bin]# zkServer.sh start
    JMX enabled by default
    Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
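
    If you check the status at this point, before node3 and node4 are up, there is no quorum yet, so the server cannot report a mode; expect something like:

    [root@node2 bin]# zkServer.sh status
    JMX enabled by default
    Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
    Error contacting service. It is probably not running.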
    

    node3:

    [root@node3 bin]# zkServer.sh start
    JMX enabled by default
    Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
    [root@node3 bin]# zkServer.sh status
    JMX enabled by default
    Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
    Mode: leader
    

    node4:

    [root@node4 data]# cd /opt/zookeeper-3.4.6/bin/
    [root@node4 bin]# zkServer.sh start
    JMX enabled by default
    Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
    [root@node4 bin]# zkServer.sh status
    JMX enabled by default
    Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
    Mode: follower
    

    At this point, node2 is a follower:

    [root@node2 bin]# zkServer.sh status
    JMX enabled by default
    Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
    Mode: follower
    

    From the output above: when the servers start one after another, the Leader is the server whose start first completes a quorum (more than half of the ensemble); servers started after it come up as Followers. Here node3 (myid 2) completed the two-of-three quorum and won the election, since with equal zxids the vote goes to the larger server id.

    If all three servers are started at the same time instead, you will find that node4 (myid 3, the largest) becomes the Leader.
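
    To see every server's role from one place, a small loop over the nodes (a sketch using the paths set up above):

    [root@node2 bin]# for h in node2 node3 node4; do echo -n "$h "; ssh $h /opt/zookeeper-3.4.6/bin/zkServer.sh status 2>/dev/null | grep Mode; done
    node2 Mode: follower
    node3 Mode: leader
    node4 Mode: follower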

  11. From any node, connect to ZooKeeper with zkCli.sh

    [root@node2 bin]# zkCli.sh
    Connecting to localhost:2181
    2021-12-18 20:37:19,489 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
    2021-12-18 20:37:19,491 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=node2
    2021-12-18 20:37:19,491 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_221
    2021-12-18 20:37:19,492 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
    2021-12-18 20:37:19,493 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/java/jdk1.8.0_221-amd64/jre
    2021-12-18 20:37:19,493 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/opt/zookeeper-3.4.6/bin/../build/classes:/opt/zookeeper-3.4.6/bin/../build/lib/*.jar:/opt/zookeeper-3.4.6/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/zookeeper-3.4.6/bin/../lib/slf4j-api-1.6.1.jar:/opt/zookeeper-3.4.6/bin/../lib/netty-3.7.0.Final.jar:/opt/zookeeper-3.4.6/bin/../lib/log4j-1.2.16.jar:/opt/zookeeper-3.4.6/bin/../lib/jline-0.9.94.jar:/opt/zookeeper-3.4.6/bin/../zookeeper-3.4.6.jar:/opt/zookeeper-3.4.6/bin/../src/java/lib/*.jar:/opt/zookeeper-3.4.6/bin/../conf:
    2021-12-18 20:37:19,493 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
    2021-12-18 20:37:19,493 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
    2021-12-18 20:37:19,493 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=
    2021-12-18 20:37:19,493 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
    2021-12-18 20:37:19,493 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
    2021-12-18 20:37:19,493 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=2.6.32-431.el6.x86_64
    2021-12-18 20:37:19,493 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=root
    2021-12-18 20:37:19,493 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/root
    2021-12-18 20:37:19,493 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/opt/zookeeper-3.4.6/bin
    2021-12-18 20:37:19,494 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@68de145
    Welcome to ZooKeeper!
    JLine support is enabled
    2021-12-18 20:37:19,516 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
    2021-12-18 20:37:19,570 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@852] - Socket connection established to localhost/127.0.0.1:2181, initiating session
    [zk: localhost:2181(CONNECTING) 0] 2021-12-18 20:37:19,605 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1235] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x17dcd8476b40000, negotiated timeout = 30000
    
    WATCHER::
    
    WatchedEvent state:SyncConnected type:None path:null
    

    Use ls / to view the znodes under the root:

    ls /
    [zookeeper]
    [zk: localhost:2181(CONNECTED) 1] 
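
    You can also create, read, and delete a throwaway znode (hypothetical name /demo; get prints the data followed by stat fields such as cZxid and ctime, elided here):

    [zk: localhost:2181(CONNECTED) 1] create /demo hello
    Created /demo
    [zk: localhost:2181(CONNECTED) 2] get /demo
    hello
    [zk: localhost:2181(CONNECTED) 3] delete /demo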
    

    Use quit to exit zkCli.sh:

    [zk: localhost:2181(CONNECTED) 1] quit
    Quitting...
    2021-12-18 20:40:16,025 [myid:] - INFO  [main:ZooKeeper@684] - Session: 0x17dcd8476b40000 closed
    2021-12-18 20:40:16,025 [myid:] - INFO  [main-EventThread:ClientCnxn$EventThread@512] - EventThread 
    

    The ZooKeeper distributed cluster is now installed!

Notes:

  • Start, stop, and check the status of ZooKeeper:

    zkServer.sh start 
    zkServer.sh stop 
    zkServer.sh status 
    
  • Connect to ZooKeeper:

    zkCli.sh
    
  • Exit zkCli.sh:

    quit
    
  • Simultaneous versus sequential startup: when the servers' transaction IDs (zxids) are all equal, the Leader is decided by server id (myid), and the startup order changes the outcome:

    myid 1 2 3 started simultaneously: server 3 (largest myid) becomes Leader

    myid 1 2 3 started one by one: server 2 becomes Leader as soon as the two-of-three quorum forms

  • Otherwise, the server with the larger transaction ID becomes Leader:

    zxid 3 3 4: the server with zxid 4 (the largest) wins the election
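
  • ZooKeeper also answers simple "four-letter" commands on the client port, handy for scripted health checks (a sketch, assuming netcat (nc) is installed):

    echo ruok | nc node2 2181    # replies "imok" if the server is running
    echo stat | nc node3 2181    # prints version, connection counts, and Mode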
