We covered service high availability at length in the previous article, and there are many ways to implement it on Linux. This article focuses on building a highly available Nginx service with CentOS 7 + LVS + Keepalived. Details below.
Environment

Hostname: Nginx01
IP: 192.168.6.10
Role: Nginx server

Hostname: Nginx02
IP: 192.168.6.11
Role: Nginx server
Hostname: LVS01
IP: 192.168.6.12
Role: LVS + Keepalived

Hostname: LVS02
IP: 192.168.6.13
Role: LVS + Keepalived

VIP: 192.168.6.15
First, we prepare to install and configure Nginx. We need to do the following.
Stop and disable the firewall, and set the hostname:

systemctl stop firewalld
systemctl disable firewalld
hostnamectl set-hostname Nginx01

Then disable SELinux:

vim /etc/selinux/config
SELINUX=disabled
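If you would rather not edit the file interactively, here is a minimal sketch that does the same thing non-interactively (assuming the stock /etc/selinux/config layout):

setenforce 0                                                   # stop enforcing immediately (lasts until reboot)
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # make the change permanent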
Next, install wget:

yum install wget

A plain "yum install nginx" will not find the package in the default CentOS repositories, so we first need to add the official Nginx repo. The available packages are listed at:

http://nginx.org/en/linux_packages.html
yum install http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
yum install -y nginx

Installation complete.
Verify the installation:

rpm -qa | grep nginx
find / -name nginx

After installation we will try to access the Nginx service; before we can do that, the service has to be started:

systemctl start nginx

Next, we access it from a web browser.
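Before switching to a browser, you can confirm from the shell that Nginx is answering. A quick check with curl against Nginx01's address from the environment table:

curl -I http://192.168.6.10   # expect an "HTTP/1.1 200 OK" status line and a "Server: nginx" header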
Let's first take a look at the nginx.conf configuration file:

vim /etc/nginx/nginx.conf

Then check how the default page is served:

/etc/nginx/conf.d/default.conf

To tell the two backends apart, we next customize what the page displays. The default web page lives at:

/usr/share/nginx/html/index.html

vim index.html

Edit the displayed content.
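For example, a minimal way to make each backend identify itself (the page text here is just an illustration, not necessarily what the author wrote):

echo 'Nginx01 - 192.168.6.10' > /usr/share/nginx/html/index.html   # run on Nginx01; use the host's own name on Nginx02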
Next, restart the service:

systemctl restart nginx

We also need to install the Nginx repo on Nginx02:

yum install http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm

Then install Nginx itself the same way as on Nginx01 (yum install -y nginx). Once installation is finished, we copy the modified index.html from Nginx01 over to Nginx02; of course, you could also just edit Nginx02's index.html directly with vim.
scp /usr/share/nginx/html/index.html 192.168.6.11:/usr/share/nginx/html/index.html

Next, back on Nginx02, adjust the index.html file:

vim /usr/share/nginx/html/index.html

After editing, saving, and exiting, restart the service and try accessing it through the web again:

systemctl restart nginx

Next, we install and configure LVS + Keepalived.
First, install ipvsadm:

yum install -y ipvsadm

Installation complete.
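Note that ipvsadm is only the userspace management tool; the balancing itself is done by the ip_vs kernel module, which normally loads automatically the first time ipvsadm is used. To check or load it by hand:

lsmod | grep ip_vs   # empty output means the module is not loaded yet
modprobe ip_vs       # load it manually if needed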
Next, install Keepalived.

First, the prerequisites:

yum install -y gcc openssl openssl-devel

Then install keepalived itself:

yum install keepalived

Installation complete.
Next, we make a backup copy of keepalived.conf; I recommend that everyone do this:

cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

Let's first look at the contents of the default keepalived.conf configuration file.

The default keepalived configuration:

! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.16
        192.168.200.17
        192.168.200.18
    }
}

virtual_server 192.168.200.100 443 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 192.168.201.100 443 {
        weight 1
        SSL_GET {
            url {
              path /
              digest ff20ad2481f97b1754ef3e12ecd3a9cc
            }
            url {
              path /mrtg/
              digest 9b3a0c85a887a256d6939da88aabd8cd
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.10.10.2 1358 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    persistence_timeout 50
    protocol TCP

    sorry_server 192.168.200.200 1358

    real_server 192.168.200.2 1358 {
        weight 1
        HTTP_GET {
            url {
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url {
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url {
              path /testurl3/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.200.3 1358 {
        weight 1
        HTTP_GET {
            url {
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334c
            }
            url {
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334c
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.10.10.3 1358 {
    delay_loop 3
    lb_algo rr
    lb_kind NAT
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 192.168.200.4 1358 {
        weight 1
        HTTP_GET {
            url {
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url {
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url {
              path /testurl3/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.200.5 1358 {
        weight 1
        HTTP_GET {
            url {
              path /testurl/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url {
              path /testurl2/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url {
              path /testurl3/test.jsp
              digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

Now empty out the keepalived.conf configuration file:
echo > /etc/keepalived/keepalived.conf

Then paste in the following content:

! Configuration File for keepalived

global_defs {
   router_id lvs_clu_1
}

vrrp_sync_group Prox {
    group {
        mail
    }
}

vrrp_instance mail {
    state MASTER
    interface eth0
    lvs_sync_daemon_interface eth0
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.6.15
    }
}

virtual_server 192.168.6.15 80 {
    delay_loop 6
    lb_algo wlc
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.6.10 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }

    real_server 192.168.6.11 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

Enable IP forwarding:
cat /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/ip_forward
systemctl start keepalived
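Writing into /proc only lasts until the next reboot. A sketch of making the setting persistent, assuming you keep such settings in /etc/sysctl.conf:

echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf   # persist across reboots
sysctl -p                                            # apply the file immediately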
Then check the service:

ipvsadm
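A more readable variant is the numeric listing; given the configuration above it should show the virtual server 192.168.6.15:80 with the two real servers behind it:

ipvsadm -Ln   # -L lists the virtual server table, -n prints numeric addresses instead of resolved names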
Next, we install the ipvsadm + Keepalived services on LVS02:

yum install -y ipvsadm
yum install -y gcc openssl openssl-devel
yum install keepalived

Since we are running two LVS servers, use the following command to copy keepalived.conf from LVS01 to LVS02 (192.168.6.13), overwriting keepalived.conf on the target server:

scp /etc/keepalived/keepalived.conf 192.168.6.13:/etc/keepalived/keepalived.conf

Enable IP forwarding on LVS02 as well:

cat /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/ip_forward

Then modify the keepalived configuration on LVS02:

! Configuration File for keepalived

global_defs {
   router_id lvs_clu_1
}

vrrp_sync_group Prox {
    group {
        mail
    }
}

vrrp_instance mail {
    state BACKUP
    interface eth0
    lvs_sync_daemon_interface eth0
    virtual_router_id 50
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.6.15
    }
}

virtual_server 192.168.6.15 80 {
    delay_loop 6
    lb_algo wlc
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.6.10 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }

    real_server 192.168.6.11 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
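Only two lines differ from the LVS01 configuration: state (MASTER vs BACKUP) and priority (100 vs 50). Everything else, in particular virtual_router_id and auth_pass, must stay identical or the two nodes will not recognize each other as one VRRP pair. A quick way to eyeball the two values after editing:

grep -E 'state|priority' /etc/keepalived/keepalived.conf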
Then start the keepalived service:

systemctl start keepalived

Finally, we need to check keepalived's running status:

systemctl status keepalived

We find an error. Check the log and the network interfaces:

tail -f /var/log/messages
ip a show

The interface name in keepalived.conf does not match this host's actual NIC, so edit the configuration and change interface eth0 to eth016777984:

vim /etc/keepalived/keepalived.conf
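The interface named in vrrp_instance must match a NIC that actually exists on the host. To list the real interface names before editing:

ip -o link show   # one interface per line; the name follows the index number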
Then restart keepalived:

systemctl restart keepalived
systemctl status keepalived

Finally, we need to set up the real-server configuration on both Nginx hosts:

vim realserver

#!/bin/bash
# chkconfig: 2345 85 35
# Description: Start real server with host boot
VIP=192.168.6.15

function start() {
    # Bind the VIP to a loopback alias so this real server accepts DR traffic
    ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP
    # Suppress ARP replies/announcements for the VIP so only the director answers
    echo 1 >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo 1 >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo 2 >/proc/sys/net/ipv4/conf/all/arp_announce
    echo "Real Server $(uname -n) started"
}

function stop() {
    # Remove the loopback alias and restore the default ARP behavior
    ifconfig lo:0 down
    echo 0 >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo 0 >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 >/proc/sys/net/ipv4/conf/all/arp_announce
    echo "Real Server $(uname -n) stopped"
}

case $1 in
start)
    start
    ;;
stop)
    stop
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac

Make it executable and run it:

chmod a+x realserver
./realserver start

Copy the script to Nginx02 and run it there as well:

scp realserver 192.168.6.11:/root/realserver
ls -l
./realserver start

Check the LVS table and the keepalived status, including keepalived's status on LVS02:

ipvsadm
systemctl status keepalived
ipvsadm -l
ip addr
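A note on the realserver script before we move on: as written, it has to be run by hand after every reboot. One simple way to have it run at boot (a sketch; the original post just calls ./realserver start manually) is rc.local:

cp realserver /usr/local/sbin/realserver
chmod +x /usr/local/sbin/realserver
echo '/usr/local/sbin/realserver start' >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local   # on CentOS 7, rc.local is not executable by default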
Then we try to access the service through the VIP:

http://192.168.6.15
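To watch the load balancing from the shell instead of a browser, you can hit the VIP repeatedly from a client machine. Keep in mind that persistence_timeout 50 in the configuration pins a given client to one backend for 50 seconds, so the response will not alternate on every request:

for i in $(seq 1 6); do curl -s http://192.168.6.15; done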
Next, we stop the nginx service on Nginx02 and try to keep accessing through the VIP:

systemctl stop nginx

Then check the ipvsadm and keepalived status:

systemctl status keepalived
ipvsadm

Then we continue to access through the VIP.
Finally, we start the nginx service on Nginx02 again, then check the ipvsadm and keepalived status:

systemctl start nginx
ipvsadm
systemctl status keepalived

Finally, we try to access the page once more.
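For continuous monitoring while you stop and start backends, one convenient sketch is to watch the IPVS table with traffic counters:

watch -n 1 ipvsadm -Ln --stats   # refresh every second, showing connection/packet/byte counters per real server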