The setup consists of two IBM x3650 M3 servers running CentOS 5.9 x64, connected to an IBM DS3400 storage array. The underlying system uses the GFS file system for file sharing; the database is a separate, independent Oracle RAC cluster, so the database is outside the scope of this architecture.
For the GFS file system and its related configuration, see the previous article, "IBM x3650 M3 + GFS + IPMI fence production environment configuration example"; this article is a continuation of it. The two servers' hostnames are node01 and node02. Because the application architecture is simple and server resources are limited, the two servers provide a high-availability architecture in active/standby mode. Source: http://koumm.blog.51cto.com/
IBM x3650 M3 + GFS + IPMI fence production environment configuration example: http://koumm.blog.51cto.com/703525/1544971
The architecture is as follows:
I. Network environment and IP address preparation (CentOS 5.9 x64)
1) Node 1 hostname: node01
Note: on IBM servers, connect the dedicated IMM2 port, or the network port labeled SYSTEM MGMT, to a switch on the same network segment as its local IP address.
IPMI: 10.10.10.85/24
eth1: 192.168.233.83/24
eth1:0
2) Node 2 hostname: node02
IPMI: 10.10.10.86/24
eth1: 192.168.233.84/24
eth1:0
# cat /etc/hosts
192.168.233.83 node01
192.168.233.84 node02
192.168.233.90 VIP
10.10.10.85 node01_IPMI
10.10.10.86 node02_IPMI
The VIP address used in this example is 192.168.233.90; it will float between the two nodes.
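Before moving on, it is worth confirming that each node's IMM/IPMI port answers on the management network, since the fence device from the previous article depends on it. The sketch below is a minimal check with ipmitool; the user name and password are placeholders for whatever credentials are configured on the IMM, not values taken from this article.
# ipmitool -I lanplus -H 10.10.10.85 -U <imm_user> -P <imm_password> chassis power status
# ipmitool -I lanplus -H 10.10.10.86 -U <imm_user> -P <imm_password> chassis power status
Both commands should report "Chassis Power is on" if the IMM ports are reachable from the nodes.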
II. Keepalived installation and configuration
1. Install the keepalived software
Note: keepalived-1.2.12 was used here and installed without problems.
(1) Download the package and install it on both node01 and node02
wget http://www.keepalived.org/software/keepalived-1.2.12.tar.gz
tar zxvf keepalived-1.2.12.tar.gz
cd keepalived-1.2.12
./configure --prefix=/usr/local/keepalived
make && make install
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
mkdir /etc/keepalived
2. Create the keepalived configuration files
1) On node01, edit the configuration file; the VRRP instance is bound to interface eth1.
Note: the standby node differs only in its priority and its local source IP (mcast_src_ip); everything else is identical to the master.
# vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
      xxx@126.com
   }
   notification_email_from service@abc.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    mcast_src_ip 192.168.233.83
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 876543
    }
    virtual_ipaddress {
        192.168.233.90
    }
}
2) Create the configuration file on node02
# vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
      xxx@126.com
   }
   notification_email_from service@abc.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    mcast_src_ip 192.168.233.84
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 876543
    }
    virtual_ipaddress {
        192.168.233.90
    }
}
3. Start the keepalived service on node01 and node02
1) Start the service and enable it at boot:
service keepalived start
chkconfig --add keepalived
chkconfig keepalived on
2) Test and observe VIP failover
(1) Observe the VIP address on the master node:
Note: you can stop the keepalived service on the master and watch the VIP move to the other node via cat /var/log/messages.
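As a quick check, the following sketch shows one way to confirm where the VIP sits and to exercise a failover; it assumes the configuration above and uses only standard iproute2 and service commands.
# ip addr show eth1          (on node01: the VIP 192.168.233.90 should be listed as a secondary address)
# service keepalived stop    (on node01: simulate a failure of the master)
# tail -f /var/log/messages  (on node02: keepalived should log a transition to MASTER state)
# ip addr show eth1          (on node02: the VIP should now appear here)
# service keepalived start   (on node01: the VIP moves back, since node01 has the higher priority)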
III. HAProxy reverse proxy configuration (perform the following operations on both node01 and node02)
1. Enable binding to non-local IP addresses
# vi /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind = 1
# sysctl -p
2. Install the HAProxy software
# tar zxvf haproxy-1.4.25.tar.gz
# cd haproxy-1.4.25
# make TARGET=linux26 PREFIX=/usr/local/haproxy
# make install PREFIX=/usr/local/haproxy
# cd /usr/local/haproxy
# mkdir conf
3. Install the socat tool
# wget http://www.dest-unreach.org/socat/download/socat-2.0.0-b5.tar.gz
# tar zxvf socat-2.0.0-b5.tar.gz
# ./configure --disable-fips
# make && make install
4. Create the configuration files
1) Create the configuration file on node01
# vi /usr/local/haproxy/conf/haproxy.cfg
global
        log 127.0.0.1 local0
        maxconn 65535
        chroot /usr/local/haproxy
        uid 99
        gid 99
        stats socket /usr/local/haproxy/HaproxSocket level admin
        daemon
        nbproc 1
        pidfile /usr/local/haproxy/haproxy.pid
        #debug
defaults
        log 127.0.0.1 local3
        mode http
        option httplog
        option httpclose
        option dontlognull
        option forwardfor
        option redispatch
        retries 2
        maxconn 2000
        balance source
        #balance roundrobin
        stats uri /haproxy-stats
        contimeout 5000
        clitimeout 50000
        srvtimeout 50000
listen web_proxy 0.0.0.0:80
        mode http
        option httpchk GET /test.html HTTP/1.0\r\nHost:192.168.233.90
        server node01 192.168.233.83:8000 weight 3 check inter 2000 rise 2 fall 1
        server node02 192.168.233.84:8000 weight 3 backup check inter 2000 rise 2 fall 1
listen stats_auth 0.0.0.0:91
        mode http
        stats enable
        stats uri /admin
        stats realm "Admin console"
        stats auth admin:123456
        stats hide-version
        stats refresh 10s
        stats admin if TRUE
2) Create the configuration file on node02
# vi /usr/local/haproxy/conf/haproxy.cfg
global
        log 127.0.0.1 local0
        maxconn 65535
        chroot /usr/local/haproxy
        uid 99
        gid 99
        stats socket /usr/local/haproxy/HaproxSocket level admin
        daemon
        nbproc 1
        pidfile /usr/local/haproxy/haproxy.pid
        #debug
defaults
        log 127.0.0.1 local3
        mode http
        option httplog
        option httpclose
        option dontlognull
        option forwardfor
        option redispatch
        retries 2
        maxconn 2000
        balance source
        #balance roundrobin
        stats uri /haproxy-stats
        contimeout 5000
        clitimeout 50000
        srvtimeout 50000
listen web_proxy 0.0.0.0:80
        mode http
        option httpchk GET /test.html HTTP/1.0\r\nHost:192.168.233.90
        server node01 192.168.233.83:8000 weight 3 backup check inter 2000 rise 2 fall 1
        server node02 192.168.233.84:8000 weight 3 check inter 2000 rise 2 fall 1
listen stats_auth 0.0.0.0:91
        mode http
        stats enable
        stats uri /admin
        stats realm "Admin_console"
        stats auth admin:123456
        stats hide-version
        stats refresh 10s
        stats admin if TRUE
Note: the backends run in active/backup mode, and each node is configured to prefer its local backend as the active one; this could also be changed to a load-balancing mode.
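The article does not show how HAProxy is started after building it. A minimal sketch, assuming the build prefix /usr/local/haproxy used above, is to launch the binary against the new configuration and then use the socat tool installed in step 3 to query the admin socket defined by the "stats socket" line:
# /usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/conf/haproxy.cfg
# echo "show info" | socat stdio unix-connect:/usr/local/haproxy/HaproxSocket
# echo "show stat" | socat stdio unix-connect:/usr/local/haproxy/HaproxSocket
"show info" prints the process version and limits; "show stat" lists each backend server and whether its health check (GET /test.html) is currently passing.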
5. Configure the HAProxy log file on node01 and node02
1) HAProxy log configuration
# vi /etc/syslog.conf
local3.* /var/log/haproxy.log
local0.* /var/log/haproxy.log
*.info;mail.none;authpriv.none;cron.none;local3.none /var/log/messages
Note: the local3.none added to the third line stops the HAProxy log entries from also being written to /var/log/messages.
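One point the article does not spell out: HAProxy sends its log lines to 127.0.0.1 over UDP, and the stock sysklogd on CentOS 5 does not listen on the network by default. A likely-needed adjustment (an assumption about this particular setup, but standard for sysklogd) is to add the -r option before restarting syslog:
# vi /etc/sysconfig/syslog
SYSLOGD_OPTIONS="-r -m 0"
-r lets syslogd accept messages arriving over UDP port 514; -m 0 disables the periodic "-- MARK --" lines.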
Then run the following commands manually:
service syslog restart
touch /var/log/haproxy.log
chown nobody:nobody /var/log/haproxy.log
Note: UID 99 in haproxy.cfg is the nobody user by default.
chmod u+x /var/log/haproxy.log
2) HAProxy log rotation
# vi /root/system/cut_log.sh
#!/bin/bash
# author: koumm
# desc:
# date: 2014-08-28
# version: v1.0
# modify:
# cut haproxy log
if [ -e /var/log/haproxy.log ]; then
    mv /var/log/haproxy.log /var/log/haproxy.log.bak
fi
if [ -e /var/log/haproxy.log.bak ]; then
    logrotate -f /etc/logrotate.conf
    chown nobody:nobody /var/log/haproxy.log
    chmod +x /var/log/haproxy.log
fi
sleep 1
if [ -e /var/log/haproxy.log ]; then
    rm -rf /var/log/haproxy.log.bak
fi
Note: run the script as root.
# crontab -e
59 23 * * * su - root -c '/root/system/cut_log.sh'
(the log-cut script runs every night at 23:59)
The HAProxy statistics page can then be reached through the VIP or either node:
http://192.168.233.90:91/admin
http://192.168.233.83:91/admin
http://192.168.233.84:91/admin
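Because the web_proxy listener health-checks GET /test.html, that file has to exist in the application served on port 8000 of each node (exactly where depends on the deployed app, so treat its location as an assumption). A quick way to verify both backends and the full proxy chain from any machine that can reach the VIP:
# curl -I http://192.168.233.83:8000/test.html   (backend on node01 answers the health-check URL directly)
# curl -I http://192.168.233.84:8000/test.html   (backend on node02)
# curl -I http://192.168.233.90/test.html        (through the keepalived VIP and HAProxy on port 80)
All three requests should return HTTP/1.1 200 OK; if a backend returns 404, HAProxy will mark it as DOWN on the stats page.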
IV. Configure session replication in the application
# vi /cluster/zhzxxt/deploy/app.war/WEB-INF/web.xml
Add a <distributable/> element directly inside <web-app>:
<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd">
<web-app>
    <distributable/>
2. Change the cluster identifier
1) Edit jboss-service.xml on node01 and node02
# vi /cluster/JBoss4/server/node01/deploy/jboss-web-cluster.sar/META-INF/jboss-service.xml
# vi /cluster/JBoss4/server/node02/deploy/jboss-web-cluster.sar/META-INF/jboss-service.xml
<attribute name="ClusterName">Tomcat-APP-Cluster</attribute>
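Once both nodes share the same ClusterName, the two JBoss instances discover each other when started. A minimal sketch of starting each instance bound to its own address follows; the installation path /cluster/JBoss4 and the server configuration names node01/node02 are inferred from the paths above, so treat the exact layout as an assumption about this deployment:
On node01:
# /cluster/JBoss4/bin/run.sh -c node01 -b 192.168.233.83
On node02:
# /cluster/JBoss4/bin/run.sh -c node02 -b 192.168.233.84
-c selects the server configuration directory under server/, and -b sets the bind address; once both are up, the JBoss log should show the other member joining the Tomcat-APP-Cluster group.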
This completes the configuration of the entire architecture; it proved stable and reliable during testing.