
Deploying an Nginx + Keepalived Dual-Master Cluster on VMware: Problems and Solutions

Preface

Nginx acts as the load balancer at the very front (or middle tier) of the architecture. As traffic keeps growing, the load balancer itself needs a high-availability setup: keepalived removes the single point of failure, so that if nginx goes down, traffic quickly fails over to the backup server.

Possible VMware network configuration problems and fixes

  • Start the VMware DHCP Service and VMware NAT Service services.
  • On the host network adapter, enable network sharing, tick the option allowing other networks, save, and restart the virtual machine.

Installation

Node layout

Node              Address            Service
centos7_1         192.168.211.130    Keepalived + Nginx
centos7_2         192.168.211.131    Keepalived + Nginx
centos7_3         192.168.211.132    Redis server
web1 (physical)   192.168.211.128    FastAPI + Celery
web2 (physical)   192.168.211.129    FastAPI + Celery

Web server configuration

Start a Python HTTP server on web1

vim index.html

<html>
<body>
<h1>Web Svr 1</h1>
</body>
</html>

nohup python -m SimpleHTTPServer 8080 > running.log 2>&1 &

Start a Python HTTP server on web2

vim index.html

<html>
<body>
<h1>Web Svr 2</h1>
</body>
</html>

nohup python -m SimpleHTTPServer 8080 > running.log 2>&1 &
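SimpleHTTPServer only exists under Python 2. If the web hosts run Python 3, the equivalent one-liner (a drop-in substitution, not part of the original setup) is:

nohup python3 -m http.server 8080 > running.log 2>&1 &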

Disable the firewall

firewall-cmd --state
systemctl stop firewalld.service
systemctl disable firewalld.service

Browser access now works: the pages show Web Svr 1 and Web Svr 2 respectively.
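A quick sanity check from the shell (a sketch using the addresses from the node table; note that these demo servers listen on port 8080 while the upstream blocks further down point at port 8001, so make sure the two ports actually match in your environment):

curl http://192.168.211.128:8080/
curl http://192.168.211.129:8080/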

Install Nginx on centos7_1 and centos7_2

First, switch to the Aliyun yum mirror

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
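After swapping the repo file, it is common practice to rebuild the yum cache (an extra step, not listed in the original article):

yum clean all
yum makecache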

Install the build dependencies

yum -y install gcc
yum install -y pcre pcre-devel
yum install -y zlib zlib-devel
yum install -y openssl openssl-devel

Download and unpack nginx

wget http://nginx.org/download/nginx-1.8.0.tar.gz
tar -zxvf nginx-1.8.0.tar.gz

Build and install nginx

cd nginx-1.8.0
./configure --user=nobody --group=nobody --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_gzip_static_module --with-http_realip_module --with-http_sub_module --with-http_ssl_module
make
make install
cd /usr/local/nginx/sbin/
# check the configuration file
./nginx -t
# start nginx
./nginx

Open the firewall for nginx

firewall-cmd --zone=public --add-port=80/tcp --permanent
systemctl restart firewalld.service

At this point both 130 and 131 serve the default nginx welcome page.

Create an nginx init script

Create an nginx startup script in the /etc/init.d directory so that the init process starts Nginx automatically every time the server reboots.

cd /etc/init.d/
vim nginx

#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig:   - 85 15
# description:  Nginx is an HTTP(S) server, HTTP(S) reverse \
#               proxy and IMAP/POP3 proxy server
# processname: nginx
# config:      /usr/local/nginx/conf/nginx.conf
# pidfile:     /var/run/nginx.pid
# user:        nginx

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/usr/local/nginx/sbin/nginx"
prog=$(basename $nginx)

NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf"

lockfile=/var/run/nginx.lock

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    configtest || return $?
    stop
    start
}

reload() {
    configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}

force_reload() {
    restart
}

configtest() {
  $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "" in
    start)
        rh_status_q && exit 0
        
        ;;
    stop)
        rh_status_q || exit 0
        
        ;;
    restart|configtest)
        
        ;;
    reload)
        rh_status_q || exit 7
        
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
            ;;
    *)
        echo $"Usage: 
chkconfig --add nginx
chkconfig --level 345 nginx on
{start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}" exit 2 esac

Give the script execute permission and register it with chkconfig so that it starts on boot:

chmod +x nginx
chkconfig --add nginx
chkconfig --level 345 nginx on
ls

functions  netconsole  network  nginx  README

The nginx service can now be managed through the init system:

service nginx start
service nginx status
service nginx reload
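On CentOS 7 the native alternative to a SysV init script is a systemd unit. A minimal sketch, not part of the original article (the unit path and directives below are illustrative and assume the /usr/local/nginx install prefix):

# create a simple unit file for the locally built nginx
cat > /etc/systemd/system/nginx.service <<'EOF'
[Unit]
Description=nginx HTTP server and reverse proxy
After=network.target

[Service]
Type=forking
ExecStartPre=/usr/local/nginx/sbin/nginx -t
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/local/nginx/sbin/nginx -s quit

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload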

Nginx reverse proxy and load balancing (centos_1)

Strip the commented-out lines from nginx.conf to get a clean baseline:

cd /usr/local/nginx/conf/
mv nginx.conf nginx.conf.bak
egrep -v '^#' nginx.conf.bak
egrep -v '^#|^[ ]*#' nginx.conf.bak
egrep -v '^#|^[ ]*#|^$' nginx.conf.bak
egrep -v '^#|^[ ]*#|^$' nginx.conf.bak >> nginx.conf
cat nginx.conf

The stripped configuration looks like this:

worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    server {
        listen       80;
        server_name  localhost;
        location / {
            root   html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

Test and reload nginx:

# test whether the configuration file is valid
../sbin/nginx -t
# reload the nginx configuration
../sbin/nginx -s reload

Next, configure the reverse proxy and load balancing; the modified nginx.conf looks like this:

worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    
    # the websvr server group (also called the load-balancing pool)
    upstream websvr {
        server 192.168.211.128:8001  weight=1;
        server 192.168.211.129:8001  weight=2;
    }
	
    server {
        listen       80;
        # the IP address or domain name this server block answers for; separate multiple entries with spaces
        server_name  192.168.211.130;
        location / {
            # hand every request to the websvr upstream group
            proxy_pass http://websvr;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

Reload nginx to apply the reverse-proxy and load-balancing configuration:

sbin/nginx -s reload

About the upstream websvr block: the name websvr is arbitrary; pick something that describes the server pool. Pointing proxy_pass at that upstream name is all it takes to enable load balancing.

Accessing 130 now alternates between Web Svr 1 and Web Svr 2. The backend is chosen by weight: the larger the weight value, the more requests the server receives, so on repeated refreshes Web Svr 2 appears on average twice for every appearance of Web Svr 1.
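A simple way to see the 2:1 weighting in action (a sketch; the grep pattern assumes the demo pages created earlier):

for i in $(seq 1 6); do curl -s http://192.168.211.130/ | grep -o 'Web Svr [12]'; done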

So far this is still not highly available. The web tier is covered, since a failed backend is simply skipped, but if the nginx service itself fails, the whole system becomes unreachable. That is why more than one Nginx is needed.

Multiple Nginx instances working together: Nginx high availability (two-node master/backup mode)

Set up a second nginx service on the other server (centos_2) exactly as before; only nginx.conf needs to change:

worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    upstream websvr {
        server 192.168.211.128:8001  weight=1;
        server 192.168.211.129:8001  weight=2;
    }

    server {
        listen       80;
        server_name  192.168.211.131;
        location / {
            proxy_pass http://websvr;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

# reload nginx
sbin/nginx -s reload

Now http://192.168.211.131/ returns the same kind of result as http://192.168.211.130/.

The two Nginx servers have different IPs, so how do we make them work together as a single entry point? This is where keepalived comes in.

Install keepalived on both CentOS nodes:

yum install keepalived pcre-devel -y

Back up the stock configuration on both nodes:

cp /etc/keepalived/keepalived.conf keepalived.conf.bak
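One pitfall worth noting: firewalld was re-enabled on the nginx nodes to open port 80, and VRRP traffic (IP protocol 112) must also be allowed through, otherwise each node believes its peer is dead and both claim the VIP. A hedged sketch of the rule, to run on both nodes (not part of the original article):

firewall-cmd --permanent --add-rich-rule='rule protocol value="vrrp" accept'
firewall-cmd --reload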

Configure keepalived

centos_1: the keepalived MASTER configuration

[root@localhost keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
    script_user root
    enable_script_security
}

vrrp_script chk_nginx {
    # monitoring script: checks whether the nginx service is running
    script "/etc/keepalived/chk_nginx.sh"
    # run the check every 10 seconds
    interval 10
    # priority change driven by the script result: on failure (non-zero exit) lower the priority by 5
    # weight -5
    # require 2 consecutive failures before the check is considered failed (priority then drops by weight, 1-255)
    # fall 2
    # a single success marks the check healthy again, without changing the priority
    # rise 1
}

vrrp_instance VI_1 {
    # role of this keepalived node: MASTER on the primary, BACKUP on the standby
    state BACKUP
    # interface used for HA monitoring; on CentOS 7 find it with: ip addr
    interface ens33
    # virtual_router_id must be identical on master and backup; must be between 1 and 255
    virtual_router_id 51
    # priority: within the same vrrp_instance the MASTER must be higher than the BACKUP;
    # when the MASTER recovers, the BACKUP hands the VIP back
    priority 90
    # VRRP advertisement interval in seconds; if no advertisement is seen, the peer is considered down and a failover occurs
    advert_int 1
    # authentication type and password; must match on both nodes
    authentication {
        # VRRP authentication type, either PASS or AH
        auth_type PASS
        # password; must be identical on both servers for them to communicate
        auth_pass 1111
    }
    track_script {
        # reference to the vrrp_script defined above; keepalived runs it periodically and may adjust the priority
        chk_nginx
    }
    virtual_ipaddress {
        # the VRRP HA virtual address; add more VIPs on additional lines if needed
        192.168.211.140
    }
}

Send the configuration file to node 131:

scp /etc/keepalived/keepalived.conf 192.168.211.131:/etc/keepalived/keepalived.conf

On node 131, only these two lines need to differ:

state BACKUP
priority 90
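To confirm that the two nodes really differ only in those lines, a quick remote diff helps (a sketch; it assumes ssh access from 130 to 131):

ssh 192.168.211.131 cat /etc/keepalived/keepalived.conf | diff /etc/keepalived/keepalived.conf -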

Create the chk_nginx.sh monitoring script referenced by the keepalived configuration:

vi /etc/keepalived/chk_nginx.sh

#!/bin/bash
# count the running nginx processes and store the result in counter
counter=`ps -C nginx --no-header | wc -l`
# a count of 0 means nginx is not running
if [ $counter -eq 0 ]; then
    # try to start nginx (the path matches the /usr/local/nginx install prefix used above)
    echo "Keepalived Info: Try to start nginx" >> /var/log/messages
    /usr/local/nginx/sbin/nginx
    sleep 3
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ]; then
        # log the failure to the system messages
        echo "Keepalived Info: Unable to start nginx" >> /var/log/messages
        # nginx could not be started, so stop keepalived and let the VIP move to the other node
        # killall keepalived
        systemctl stop keepalived
        exit 1
    else
        echo "Keepalived Info: Nginx service has been restored" >> /var/log/messages
        exit 0
    fi
else
    # nginx is healthy
    echo "Keepalived Info: Nginx detection is normal" >> /var/log/messages
    exit 0
fi

This is the monitoring script that the master keepalived configuration points at; keepalived runs it periodically. Grant it execute permission and give it a test run:

chmod +x chk_nginx.sh
./chk_nginx.sh
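A quick manual test of the health check (a sketch; it assumes the install prefix used earlier):

/usr/local/nginx/sbin/nginx -s stop
./chk_nginx.sh
tail -n 5 /var/log/messages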

Restart keepalived on both nodes:

systemctl restart keepalived
systemctl status keepalived

Visiting the virtual IP 192.168.211.140 now also serves the page, which means the VIP binding works. While testing, you can follow the log output in /var/log/messages in real time:

tail -f /var/log/messages

# when nginx has been stopped
Keepalived Info: Try to start nginx
Keepalived Info: Nginx service has been restored
# when nginx is running normally
Keepalived Info: Nginx detection is normal
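To see which node currently owns the virtual IP, check the interface directly (a quick check; ens33 and the VIP are taken from the configuration above):

ip addr show ens33 | grep 192.168.211.140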


When nginx is detected as healthy, the script returns 0; when nginx cannot be recovered, it returns 1. Keepalived, however, does not appear to fail over on that return value alone in this setup: the script stops the local keepalived service, which releases the local VIP, and the virtual IP then moves to the other server.
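To exercise a full failover, stop keepalived on the node that currently owns the VIP and watch the address move (a sketch; node roles follow the table at the top):

# on the current VIP owner, e.g. 192.168.211.130
systemctl stop keepalived
# on the other node the VIP should appear within a few advert_int periods
ip addr show ens33 | grep 192.168.211.140
# the site should still answer through the VIP
curl -s http://192.168.211.140/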

References

https://www.jianshu.com/p/7e8e61d34960
https://www.cnblogs.com/zhangxingeng/p/10721083.html

This concludes the walkthrough of deploying an Nginx + Keepalived high-availability cluster on VMware and the problems encountered along the way.
