Help: Errors in a Single-Machine Multi-Node Redis Cluster Experiment on Linux


Step 1: Install Redis

Redis itself was already installed earlier, so that is not repeated here.
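For reference only, a typical source build that yields the /usr/local/redis/bin path used below would look like this (a sketch; the exact commands used in the original setup are not shown):

cd redis-3.2.9
make
make install PREFIX=/usr/local/redis    # installs redis-server/redis-cli under /usr/local/redis/bin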

The Redis source package ships a cluster tool, which needs to be copied into /usr/local/bin:

cp redis-3.2.9/src/redis-trib.rb /usr/local/bin

Step 2: Modify the configuration and create the nodes

We want six nodes in total: three masters and three replicas.

The ports are 7001, 7002, 7003, 7004, 7005, and 7006.

First create a redis_cluster directory under root's home directory, then create six subdirectories inside it,

named 7001 through 7006, to hold the Redis configuration files.

To run Redis in cluster mode, we first need to modify the Redis configuration file redis.conf.

mkdir redis_cluster    # create the directory

[root@localhost ~]# cd redis_cluster/

[root@localhost redis_cluster]# mkdir 7001 7002 7003 7004 7005 7006

[root@localhost redis_cluster]# ll

total 0

drwxr-xr-x. 2 root root 6 Jul 27 17:18 7001

drwxr-xr-x. 2 root root 6 Jul 27 17:18 7002

drwxr-xr-x. 2 root root 6 Jul 27 17:18 7003

drwxr-xr-x. 2 root root 6 Jul 27 17:18 7004

drwxr-xr-x. 2 root root 6 Jul 27 17:18 7005

drwxr-xr-x. 2 root root 6 Jul 27 17:18 7006

[root@localhost redis_cluster]#

First copy a configuration file into the 7001 directory:

[root@localhost redis_cluster]# cd

[root@localhost ~]# cp redis-3.2.9/redis.conf redis_cluster/7001/

Now edit that configuration file:

vi redis_cluster/7001/redis.conf

Change the following settings (shown for 7001):

port 7001 // ports 7001-7006 for the six nodes respectively

daemonize yes // run Redis as a background daemon

pidfile /var/run/redis_7001.pid // pidfile name matches 7001-7006

cluster-enabled yes // enable cluster mode

cluster-config-file nodes_7001.conf // node state file, created and updated automatically; name matches 7001-7006

cluster-node-timeout 5000 // cluster node timeout in milliseconds; a node that does not respond within this time is considered down

appendonly yes // AOF persistence: append every write operation to a log file

Once 7001 is done, copy its configuration into 7002-7006 and then adjust each copy:

[root@localhost ~]# cp redis_cluster/7001/redis.conf redis_cluster/7002/

[root@localhost ~]# cp redis_cluster/7001/redis.conf redis_cluster/7003/

[root@localhost ~]# cp redis_cluster/7001/redis.conf redis_cluster/7004/

[root@localhost ~]# cp redis_cluster/7001/redis.conf redis_cluster/7005/

[root@localhost ~]# cp redis_cluster/7001/redis.conf redis_cluster/7006/

[root@localhost ~]# vi redis_cluster/7002/redis.conf

[root@localhost ~]# vi redis_cluster/7003/redis.conf

[root@localhost ~]# vi redis_cluster/7004/redis.conf

[root@localhost ~]# vi redis_cluster/7005/redis.conf

[root@localhost ~]# vi redis_cluster/7006/redis.conf

In the remaining five configuration files, just change port, pidfile, and cluster-config-file to match each node; see the sed sketch below.
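Rather than editing the five files by hand as above, a small loop can do the copy and all three substitutions in one pass. This is a sketch that assumes the string 7001 appears only in the port, pidfile, and cluster-config-file lines of the 7001 config:

for p in 7002 7003 7004 7005 7006; do
  cp redis_cluster/7001/redis.conf redis_cluster/$p/redis.conf
  sed -i "s/7001/$p/g" redis_cluster/$p/redis.conf    # rewrite port, pidfile, cluster-config-file
done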

Step 3: Start the six Redis nodes

[root@localhost ~]# /usr/local/redis/bin/redis-server redis_cluster/7001/redis.conf

[root@localhost ~]# /usr/local/redis/bin/redis-server redis_cluster/7002/redis.conf

[root@localhost ~]# /usr/local/redis/bin/redis-server redis_cluster/7003/redis.conf

[root@localhost ~]# /usr/local/redis/bin/redis-server redis_cluster/7004/redis.conf

[root@localhost ~]# /usr/local/redis/bin/redis-server redis_cluster/7005/redis.conf

[root@localhost ~]# /usr/local/redis/bin/redis-server redis_cluster/7006/redis.conf

This starts the six nodes.
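The six startup commands can equally be written as a loop over the ports (same binary and config paths as above):

for p in 7001 7002 7003 7004 7005 7006; do
  /usr/local/redis/bin/redis-server redis_cluster/$p/redis.conf    # each node daemonizes itself
done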

[root@localhost ~]# ps -ef | grep redis

Check the Redis processes:

root      9501     1  0 17:38 ?        00:00:00 /usr/local/redis/bin/redis-server 127.0.0.1:7001 [cluster]

root      9512     1  0 17:45 ?        00:00:00 /usr/local/redis/bin/redis-server 127.0.0.1:7002 [cluster]

root      9516     1  0 17:45 ?        00:00:00 /usr/local/redis/bin/redis-server 127.0.0.1:7003 [cluster]

root      9520     1  0 17:45 ?        00:00:00 /usr/local/redis/bin/redis-server 127.0.0.1:7004 [cluster]

root      9524     1  0 17:45 ?        00:00:00 /usr/local/redis/bin/redis-server 127.0.0.1:7005 [cluster]

root      9528     1  0 17:45 ?        00:00:00 /usr/local/redis/bin/redis-server 127.0.0.1:7006 [cluster]

All of them started successfully.

Step 4: Create the cluster

Redis officially provides the redis-trib.rb tool, which we already copied into /usr/local/bin in step 1.

Before it can be used, however, Ruby must be installed, together with the redis gem that connects Ruby to Redis:

yum -y install ruby ruby-devel rubygems rpm-build

gem install redis


Create the cluster:

[root@localhost ~]# redis-trib.rb create --replicas 1 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 127.0.0.1:7006

>>> Creating cluster

>>> Performing hash slots allocation on 6 nodes...

Using 3 masters:

127.0.0.1:7001

127.0.0.1:7002

127.0.0.1:7003

Adding replica 127.0.0.1:7004 to 127.0.0.1:7001

Adding replica 127.0.0.1:7005 to 127.0.0.1:7002

Adding replica 127.0.0.1:7006 to 127.0.0.1:7003

M: bfcfcdc304b011023fa568e044ea23ea6bc03c3c 127.0.0.1:7001

slots:0-5460 (5461 slots) master

M: d61e66e49e669b99d801f22f6461172696fdd1c9 127.0.0.1:7002

slots:5461-10922 (5462 slots) master

M: aa6bc3f1e1174c3a991c01882584707c2408ec18 127.0.0.1:7003

slots:10923-16383 (5461 slots) master

S: 7908a60306333c5d7c7c5e7ffef44bdf947ef0a4 127.0.0.1:7004

replicates bfcfcdc304b011023fa568e044ea23ea6bc03c3c

S: 1d2341fd3b79ef0fccb8e3a052bba141337c6cdd 127.0.0.1:7005

replicates d61e66e49e669b99d801f22f6461172696fdd1c9

S: f25b35f208dc96605ee4660994d2ac52f39ac870 127.0.0.1:7006

replicates aa6bc3f1e1174c3a991c01882584707c2408ec18

Can I set the above configuration? (type 'yes' to accept):

From the output, the masters are 7001, 7002, and 7003, and the replicas are 7004, 7005, and 7006.

7001 is assigned hash slots 0-5460

7002 is assigned hash slots 5461-10922

7003 is assigned hash slots 10923-16383

Finally it asks whether we accept the configuration above; we type yes to accept.

It then prints:

>>> Nodes configuration updated

>>> Assign a different config epoch to each node

>>> Sending CLUSTER MEET messages to join the cluster

Waiting for the cluster to join......

>>> Performing Cluster Check (using node 127.0.0.1:7001)

M: bfcfcdc304b011023fa568e044ea23ea6bc03c3c 127.0.0.1:7001

slots:0-5460 (5461 slots) master

1 additional replica(s)

S: f25b35f208dc96605ee4660994d2ac52f39ac870 127.0.0.1:7006

slots: (0 slots) slave

replicates aa6bc3f1e1174c3a991c01882584707c2408ec18

M: d61e66e49e669b99d801f22f6461172696fdd1c9 127.0.0.1:7002

slots:5461-10922 (5462 slots) master

1 additional replica(s)

S: 1d2341fd3b79ef0fccb8e3a052bba141337c6cdd 127.0.0.1:7005

slots: (0 slots) slave

replicates d61e66e49e669b99d801f22f6461172696fdd1c9

M: aa6bc3f1e1174c3a991c01882584707c2408ec18 127.0.0.1:7003

slots:10923-16383 (5461 slots) master

1 additional replica(s)

S: 7908a60306333c5d7c7c5e7ffef44bdf947ef0a4 127.0.0.1:7004

slots: (0 slots) slave

replicates bfcfcdc304b011023fa568e044ea23ea6bc03c3c

[OK] All nodes agree about slots configuration.

>>> Check for open slots...

>>> Check slots coverage...

[OK] All 16384 slots covered.

This shows the hash slot assignment and that the cluster was created successfully; it is ready for use.

Step 5: Test data in the cluster

First connect to any node, then add a key.

redis-cli is Redis's default client tool; start it with the `-c` flag (cluster mode) and `-p` to specify a port, and it can connect to the cluster.

Connect to any node's port:

[root@localhost ~]# /usr/local/redis/bin/redis-cli -c -p 7002

127.0.0.1:7002>

We are now connected to 7002.

127.0.0.1:7002> set xxx 'fdafda'

-> Redirected to slot [4038] located at 127.0.0.1:7001

OK

As mentioned earlier about Redis Cluster's key distribution rules: when storing a key, the cluster computes CRC16(key) % 16384 to decide which node the key lands on. Here the key maps to slot 4038, which lies in 7001's range (0-5460), so the key is stored on node 7001. Hence the message:

Redirected to slot [4038] located at 127.0.0.1:7001
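You can verify the slot computation yourself with the standard CLUSTER KEYSLOT command from any node:

127.0.0.1:7002> cluster keyslot xxx
(integer) 4038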

The data can then be fetched from any other node in the cluster:

127.0.0.1:7001> exit

[root@localhost ~]# /usr/local/redis/bin/redis-cli -c -p 7005

127.0.0.1:7005> get xxx

-> Redirected to slot [4038] located at 127.0.0.1:7001

"fdafda"

127.0.0.1:7001>

Step 6: Test cluster node failure

Suppose we kill one node, say the master 7002:

[root@localhost ~]# ps -ef | grep redis

root      9501     1  0 17:38 ?        00:00:02 /usr/local/redis/bin/redis-server 127.0.0.1:7001 [cluster]

root      9512     1  0 17:45 ?        00:00:01 /usr/local/redis/bin/redis-server 127.0.0.1:7002 [cluster]

root      9516     1  0 17:45 ?        00:00:01 /usr/local/redis/bin/redis-server 127.0.0.1:7003 [cluster]

root      9520     1  0 17:45 ?        00:00:02 /usr/local/redis/bin/redis-server 127.0.0.1:7004 [cluster]

root      9524     1  0 17:45 ?        00:00:01 /usr/local/redis/bin/redis-server 127.0.0.1:7005 [cluster]

root      9528     1  0 17:45 ?        00:00:01 /usr/local/redis/bin/redis-server 127.0.0.1:7006 [cluster]

root      9601  2186  0 18:12 pts/0    00:00:00 grep --color=auto redis

[root@localhost ~]# kill -9 9512

[root@localhost ~]# ps -ef | grep redis

root      9501     1  0 17:38 ?        00:00:02 /usr/local/redis/bin/redis-server 127.0.0.1:7001 [cluster]

root      9516     1  0 17:45 ?        00:00:01 /usr/local/redis/bin/redis-server 127.0.0.1:7003 [cluster]

root      9520     1  0 17:45 ?        00:00:02 /usr/local/redis/bin/redis-server 127.0.0.1:7004 [cluster]

root      9524     1  0 17:45 ?        00:00:01 /usr/local/redis/bin/redis-server 127.0.0.1:7005 [cluster]

root      9528     1  0 17:45 ?        00:00:01 /usr/local/redis/bin/redis-server 127.0.0.1:7006 [cluster]

root      9603  2186  0 18:12 pts/0    00:00:00 grep --color=auto redis

[root@localhost ~]#

Then check the state of the cluster again:

redis-trib.rb check 127.0.0.1:7001

>>> Performing Cluster Check (using node 127.0.0.1:7001)

M: bfcfcdc304b011023fa568e044ea23ea6bc03c3c 127.0.0.1:7001

slots:0-5460 (5461 slots) master

1 additional replica(s)

S: f25b35f208dc96605ee4660994d2ac52f39ac870 127.0.0.1:7006

slots: (0 slots) slave

replicates aa6bc3f1e1174c3a991c01882584707c2408ec18

M: 1d2341fd3b79ef0fccb8e3a052bba141337c6cdd 127.0.0.1:7005

slots:5461-10922 (5462 slots) master

0 additional replica(s)

M: aa6bc3f1e1174c3a991c01882584707c2408ec18 127.0.0.1:7003

slots:10923-16383 (5461 slots) master

1 additional replica(s)

S: 7908a60306333c5d7c7c5e7ffef44bdf947ef0a4 127.0.0.1:7004

slots: (0 slots) slave

replicates bfcfcdc304b011023fa568e044ea23ea6bc03c3c

[OK] All nodes agree about slots configuration.

>>> Check for open slots...

>>> Check slots coverage...

[OK] All 16384 slots covered.

We can see that 7005, originally a replica, was automatically promoted to master when its master died. That is why the check still ends with

All 16384 slots covered. All hash slots are still covered, and the cluster remains usable.
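The promotion can also be seen from inside the cluster with the standard CLUSTER NODES command; in its output the 7005 entry should now carry the master flag, and the dead 7002 entry should be flagged fail:

/usr/local/redis/bin/redis-cli -c -p 7001 cluster nodes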

Now let's kill 7005 as well and see what happens:

[root@localhost ~]# kill -9 9524

[root@localhost ~]# ps -ef | grep redis

root      9501     1  0 17:38 ?        00:00:03 /usr/local/redis/bin/redis-server 127.0.0.1:7001 [cluster]

root      9516     1  0 17:45 ?        00:00:02 /usr/local/redis/bin/redis-server 127.0.0.1:7003 [cluster]

root      9520     1  0 17:45 ?        00:00:03 /usr/local/redis/bin/redis-server 127.0.0.1:7004 [cluster]

root      9528     1  0 17:45 ?        00:00:02 /usr/local/redis/bin/redis-server 127.0.0.1:7006 [cluster]

root      9610  2186  0 18:16 pts/0    00:00:00 grep --color=auto redis

[root@localhost ~]#

Check the cluster state:

redis-trib.rb check 127.0.0.1:7001

This time there is trouble: the master and its replica are both down, so part of the hash slot range has no node left to serve it. The check ends with

[ERR] Not all 16384 slots are covered by nodes. Not all slots are covered,

so the cluster can no longer operate normally.
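To recover, it is usually enough to restart the killed instances; on startup each node rejoins the cluster using the state saved in its nodes_700x.conf file (a sketch, assuming the config files and AOF data are intact):

/usr/local/redis/bin/redis-server redis_cluster/7002/redis.conf
/usr/local/redis/bin/redis-server redis_cluster/7005/redis.conf
redis-trib.rb check 127.0.0.1:7001    # should report all 16384 slots covered again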

On Linux, the following methods can be used to inspect the file system, that is, to see the filesystem type, such as whether it is ext4 or ext3.

1. mount

:~$ mount

/dev/sda1 on / type ext4 (rw,errors=remount-ro,user_xattr)

proc on /proc type proc (rw,noexec,nosuid,nodev)

none on /sys type sysfs (rw,noexec,nosuid,nodev)

none on /sys/fs/fuse/connections type fusectl (rw)

none on /sys/kernel/debug type debugfs (rw)

none on /sys/kernel/security type securityfs (rw)

none on /dev type devtmpfs (rw,mode=0755)

none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)

none on /dev/shm type tmpfs (rw,nosuid,nodev)

none on /var/run type tmpfs (rw,nosuid,mode=0755)

none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)

none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)

none on /var/lib/ureadahead/debugfs type debugfs (rw,relatime)

none on /proc/fs/vmblock/mountPoint type vmblock (rw)

binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)

gvfs-fuse-daemon on /home/kysnail/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=kysnail)

:~$

2. df

:~$ df -lhT

Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda1      ext4       19G   11G  7.8G  57% /
none           devtmpfs  498M  248K  497M   1% /dev
none           tmpfs     502M  252K  501M   1% /dev/shm
none           tmpfs     502M   96K  502M   1% /var/run
none           tmpfs     502M     0  502M   0% /var/lock
none           tmpfs     502M     0  502M   0% /lib/init/rw
none           debugfs    19G   11G  7.8G  57% /var/lib/ureadahead/debugfs

:~$

3. fdisk

:~$ sudo fdisk /dev/sda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to

switch off the mode (command 'c') and change display units to

sectors (command 'u').

Command (m for help): c

DOS Compatibility flag is not set

Command (m for help): u

Changing display/entry units to sectors

Command (m for help): p

Disk /dev/sda: 21.5 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00077544

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048    40105983    20051968   83  Linux
/dev/sda2        40108030    41940991      916481    5  Extended
/dev/sda5        40108032    41940991      916480   82  Linux swap / Solaris

Command (m for help): q

4. file

:~$ sudo file -s /dev/sda

/dev/sda: x86 boot sector; partition 1: ID=0x83, active, starthead 32, startsector 2048, 40103936 sectors; partition 2: ID=0x5, starthead 254, startsector 40108030, 1832962 sectors, code offset 0x63

kysnail@ubunkysnail:~$ sudo file -s /dev/sda1

/dev/sda1: Linux rev 1.0 ext4 filesystem data, UUID=4942da40-8a49-4bfd-9dc2-45c906d48413 (needs journal recovery) (extents) (large files) (huge files)

:~$

5. parted

:~$ sudo parted

GNU Parted 2.2

Using /dev/sda

Welcome to GNU Parted! Type 'help' to view a list of commands.

(parted) p

Model: VMware, VMware Virtual S (scsi)

Disk /dev/sda: 21.5GB

Sector size (logical/physical): 512B/512B

Partition Table: msdos

Number  Start   End     Size    Type      File system     Flags
 1      1049kB  20.5GB  20.5GB  primary   ext4            boot
 2      20.5GB  21.5GB  938MB   extended
 5      20.5GB  21.5GB  938MB   logical   linux-swap(v1)

(parted)

6. Check /etc/fstab

# /etc/fstab: static file system information.

#

# Use 'blkid -o value -s UUID' to print the universally unique identifier

# for a device; this may be used with UUID= as a more robust way to name

# devices that works even if disks are added and removed. See fstab(5).

#

# <file system><mount point> <type> <options> <dump> <pass>

proc            /proc           proc    nodev,noexec,nosuid 0       0

# / was on /dev/sda1 during installation
UUID=4942da40-8a49-4bfd-9dc2-45c906d48413 /               ext4    errors=remount-ro,user_xattr 0       1

# swap was on /dev/sda5 during installation
UUID=935fb95d-771f-448e-9d23-4820106e1783 none            swap    sw              0       0

/dev/fd0        /media/floppy0  auto    rw,user,noauto,exec,utf8 0       0
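The blkid tool mentioned in the fstab comments also reports the filesystem type directly; reusing the UUID shown by file -s above, the output should look roughly like this:

:~$ sudo blkid /dev/sda1
/dev/sda1: UUID="4942da40-8a49-4bfd-9dc2-45c906d48413" TYPE="ext4"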

Setting up a heartbeat dual-machine hot-standby service on Linux

【1】 Pre-installation environment setup

Hardware environment of the two hosts (they need not be identical):

CPU: Xeon 3G *2 (EM64T)

MEM: 2G

NIC: Intel 1G *2

eth0: public-facing IP

eth1: internal IP (dedicated to HA)

The eth1 interfaces of the two hosts are connected directly with a crossover cable.

Partition layout:

Filesystem   Size   Mount point
/dev/sda2    9.7G   /
/dev/sda6     45G   /Datas
/dev/sda1     99M   /boot
none         2.0G   /dev/shm
/dev/sda3    9.7G   /opt

In addition, each host should reserve 500 MB or more of raw space for HA to use as shared space.

Operating system:

RedHat Enterprise 4 Update2 (2.6.9-22 EL)

Pre-installed software (package groups):

@ X Window System

@ GNOME Desktop Environment

@ KDE Desktop Environment

@ Editors

@ Engineering and Scientific

@ Graphical Internet

@ Text-based Internet

@ Authoring and Publishing

@ Server Configuration Tools

@ Development Tools

@ Kernel Development

@ X Software Development

@ GNOME Software Development

@ KDE Software Development

@ Administration Tools

@ System Tools

【2】 Pre-installation network setup:

node1: hostname servers201 ( HA01 )

eth0: 192.168.10.201 // public IP

eth1: 10.0.0.201 // HA heartbeat IP

node2: hostname servers202 ( HA02 )

eth0: 192.168.10.202 // public IP

eth1: 10.0.0.202 // HA heartbeat IP

Pay particular attention to checking the following files:

/etc/hosts

/etc/host.conf

/etc/resolv.conf

/etc/sysconfig/network

/etc/sysconfig/network-scripts/ifcfg-eth0

/etc/sysconfig/network-scripts/ifcfg-eth1

/etc/nsswitch.conf

#vi /etc/hosts

node1's hosts file:

127.0.0.1 localhost.localdomain localhost

192.168.10.201 servers201 HA01

10.0.0.201 HA01

10.0.0.202 HA02

192.168.10.202 servers202

node2's hosts file:

127.0.0.1 localhost.localdomain localhost

192.168.10.202 servers202 HA02

10.0.0.202 HA02

10.0.0.201 HA01

192.168.10.201 servers201

#cat /etc/host.conf

order hosts,bind

#cat /etc/resolv.conf

nameserver 61.139.2.69 // DNS server address

#cat /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=servers201 // hostname

GATEWAY="192.168.10.1" // gateway

GATEWAYDEV="eth0" // NIC used to reach the gateway

ONBOOT=YES // bring up at boot

FORWARD_IPV4="yes" // enable IPv4 forwarding

#cat /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0

ONBOOT=yes

BOOTPROTO=static

IPADDR=192.168.10.201

NETMASK=255.255.255.0

GATEWAY=192.168.10.1

TYPE=Ethernet

IPV6INIT=no

#cat /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1

ONBOOT=yes

BOOTPROTO=none

IPADDR=10.0.0.201

NETMASK=255.255.255.0

TYPE=Ethernet

For [node1] and [node2], the configuration above is identical except for the following files, which must be edited separately on each host:

/etc/hosts

/etc/sysconfig/network

/etc/sysconfig/network-scripts/ifcfg-eth0

/etc/sysconfig/network-scripts/ifcfg-eth1

Once configuration is done, try pinging the other host by hostname from each host; the pings should succeed:

/root#ping HA02

PING HA02 (10.0.0.202) 56(84) bytes of data.

64 bytes from HA02 (10.0.0.202): icmp_seq=0 ttl=64 time=0.198 ms

64 bytes from HA02 (10.0.0.202): icmp_seq=1 ttl=64 time=0.266 ms

64 bytes from HA02 (10.0.0.202): icmp_seq=2 ttl=64 time=0.148 ms

--- HA02 ping statistics ---

3 packets transmitted, 3 received, 0% packet loss, time 2002ms

rtt min/avg/max/mdev = 0.148/0.204/0.266/0.048 ms, pipe 2

【3】 Install HA and its dependency packages

rpm -Uvh libnet-1.1.2.1-1.rh.el.um.1.i386.rpm // optional

rpm -Uvh heartbeat-pils-2.0.4-1.el4.i386.rpm

rpm -Uvh heartbeat-stonith-2.0.4-1.el4.i386.rpm

rpm -Uvh heartbeat-2.0.4-1.el4.i386.rpm

rpm -Uvh ipvsadm-1.24-5.i386.rpm

【4】 Configure the HA configuration files

Configure the heartbeat authentication method: authkeys

#vi /etc/ha.d/authkeys

If the heartbeat link is a direct crossover (twisted-pair) cable, it can be configured as follows:

auth 1

1 crc
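Note that crc provides only integrity checking, with no authentication; if the heartbeat link is ever reachable from a shared network, an authenticated method such as sha1 is the safer choice (the secret below is a placeholder you must replace):

auth 1
1 sha1 YourSecretKeyHere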

Save and exit, then:

#chmod 600 authkeys

Configure the monitored resources: haresources

#vi /etc/ha.d/haresources

This part must be completely identical on every host.

servers201 IPaddr::192.168.10.200 ipvsadm httpd

This line tells servers201 to start the http service via ipvsadm, with the system attaching the virtual IP 192.168.10.200 to eth0:0.

If servers201 goes down, servers202 automatically starts the http service and the virtual IP 192.168.10.200 is reassigned to servers202's eth0:0.
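Once heartbeat is running, you can confirm which host currently holds the virtual IP by checking the eth0:0 alias mentioned above:

#ifconfig eth0:0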

Configure heartbeat's main configuration file: ha.cf

#vi /etc/ha.d/ha.cf

logfile /var/log/ha_log/ha-log.log ## HA log file location; if the directory does not exist, create it manually

bcast eth1 ## use eth1 for heartbeat monitoring

keepalive 2 ## heartbeat interval: 2 seconds

warntime 10

deadtime 30

initdead 120

hopfudge 1

udpport 694 ## use UDP port 694 for heartbeat monitoring

auto_failback on

node servers201 ## node 1; must match the output of the uname -n command

node servers202 ## node 2

ping 192.168.10.1 ## ping the gateway to check that the network is healthy

respawn hacluster /usr/lib64/heartbeat/ipfail

apiauth ipfail gid=root uid=root

debugfile /Datas/logs/ha_log/ha-debug.log

Set up ipvsadm's round-robin (rr) forwarding to the real servers:

ipvsadm -A -t 192.168.10.200:80 -s rr

ipvsadm -a -t 192.168.10.200:80 -r 192.168.10.201:80 -m

ipvsadm -a -t 192.168.10.200:80 -r 192.168.10.202:80 -m

After executing these, verify:

#ipvsadm --list

If the output matches the following, the setup is correct.

IP Virtual Server version 1.2.0 (size=4096)

Prot LocalAddress:Port Scheduler Flags

  -> RemoteAddress:Port             Forward Weight ActiveConn InActConn

TCP  192.168.10.200:http rr

  -> servers202:http                Local   1      0          0

  -> servers201:http                Masq    1      0          0

【5】 Starting, stopping, and testing the HA service

Start HA: service heartbeat start

Stop HA: service heartbeat stop

The system already loads heartbeat automatically at boot.

Testing heartbeat with the http service

First start the httpd service:

#service httpd start

Create a test html file on each host and place it under /var/www/html/.

Start heartbeat on node1, and monitor it with: service heartbeat status

【6】 Firewall settings

heartbeat uses UDP port 694 for heartbeat monitoring by default. If the system runs iptables as a firewall, remember to open this port.

#vi /etc/sysconfig/iptables

Add the following line:

-A RH-Firewall-1-INPUT -p udp -m udp --dport 694 -d 10.0.0.201 -j ACCEPT

This allows UDP port 694 traffic destined for the heartbeat NIC address 10.0.0.201.

#service iptables restart

Reload iptables.
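The other host needs the matching rule with its own heartbeat address; mirroring the line above for node2 gives:

-A RH-Firewall-1-INPUT -p udp -m udp --dport 694 -d 10.0.0.202 -j ACCEPT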

