Step 1: Create a bond0 configuration file
# touch /etc/sysconfig/network-scripts/ifcfg-bond0    (create an empty bond0 config file)
# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=static
IPADDR=1.1.1.2
NETMASK=255.255.255.0
BROADCAST=1.1.1.255
NETWORK=1.1.1.0
GATEWAY=1.1.1.1
ONBOOT=yes
TYPE=Ethernet
Edit ifcfg-bond0 as shown above.
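Note: if NetworkManager happens to be managing interfaces on this host (an assumption about the environment, not part of the original lab), this ifcfg file and the slave files created in step 2 usually also need the following line so the legacy network service keeps control of the bonded interfaces:
NM_CONTROLLED=no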
Step 2: Edit /etc/sysconfig/network-scripts/ifcfg-ethX
In this lab, NIC 1 and NIC 2 are bonded together. Edit the corresponding /etc/sysconfig/network-scripts/ifcfg-ethX files as follows:
# cat /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=Ethernet
DEVICE=eth1
HWADDR=00:d0:f8:40:f1:a0    (MAC address of NIC 1)
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
MASTER=bond0
SLAVE=yes
# cat /etc/sysconfig/network-scripts/ifcfg-eth2
TYPE=Ethernet
DEVICE=eth2
HWADDR=00:d0:f8:00:0c:0c    (MAC address of NIC 2)
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
MASTER=bond0
SLAVE=yes
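The HWADDR values must match the physical NICs. Once enslaved, the interfaces all report the bond's MAC address (as the ifconfig output further below shows), so if needed the permanent addresses are best read before bonding, using standard tools:
# ifconfig eth1 | grep HWaddr    (shows the NIC's MAC address)
# ifconfig eth2 | grep HWaddr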
Step 3: Configure /etc/modprobe.conf by adding alias bond0 bonding
# cat /etc/modprobe.conf
alias eth0 e100
alias snd-card-0 snd-intel8x0
options snd-card-0 index=0
options snd-intel8x0 index=0
remove snd-intel8x0 { /usr/sbin/alsactl store 0 >/dev/null 2>&1 || : ; }; /sbin/modprobe -r --ignore-remove snd-intel8x0
alias eth1 8139too
options 3c501 irq=3
alias eth2 tulip
The lines above are the machine's existing configuration for the three NICs themselves. To bond and run LACP, only the following two lines need to be added:
alias bond0 bonding    (enables bonding)
options bond0 miimon=100 mode=4    (mode=4 is LACP)
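For reference, the bonding driver also accepts the mode by name rather than by number, so the following line (same driver options, just the named form) is equivalent:
options bond0 miimon=100 mode=802.3ad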
Step 4: Configure /etc/rc.d/rc.local by adding the NICs to be enslaved
# cat /etc/rc.d/rc.local
touch /var/lock/subsys/local    (this line is already present by default)
ifenslave bond0 eth1 eth2    (this command enslaves NICs 1 and 2 to bond0)
At this point the bonding configuration is complete and can be verified.
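On kernels that expose bonding through sysfs (a reasonable assumption for 2.6-era and later kernels, though not shown in the original lab), a quick sanity check is possible once the bond is up, without rebooting:
# cat /sys/class/net/bond0/bonding/mode    (should report: 802.3ad 4)
# cat /sys/class/net/bond0/bonding/slaves    (should list: eth1 eth2)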
Step 5: Restart the network service and reboot the PC
# service network restart    (restart the network service)
# shutdown -r now    (reboot the PC)
After the reboot, the bonding status can be checked: NIC 1 and NIC 2 are both enslaved, and the mode is 802.3ad.
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.0.3 (March 23, 2006)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: slow
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 2
Actor Key: 9
Partner Key: 1
Partner Mac Address: 00:d0:f8:22:33:ba

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:d0:f8:40:f1:a0
Aggregator ID: 1
Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:d0:f8:00:0c:0c
Aggregator ID: 1
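During failover tests it can be convenient to monitor this file continuously; a simple sketch using standard tools:
# watch -n 1 cat /proc/net/bonding/bond0    (refreshes the bonding status every second)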
Interface configuration: a bond0 entry has been added, and after bonding the three interfaces bond0, eth1, and eth2 all share the same MAC address: 00:D0:F8:40:F1:A0.
# ifconfig
bond0 Link encap:Ethernet HWaddr 00:D0:F8:40:F1:A0
inet addr:1.1.1.2 Bcast:1.1.1.255 Mask:255.255.255.0
inet6 addr: fe80::2d0:f8ff:fe40:f1a0/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:128 errors:0 dropped:0 overruns:0 frame:0
TX packets:259 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:15466 (15.1 KiB) TX bytes:39679 (38.7 KiB)
eth0 Link encap:Ethernet HWaddr 00:11:11:EB:71:E2
inet addr:192.168.180.8 Bcast:192.168.180.15 Mask:255.255.255.240
inet6 addr: fe80::211:11ff:feeb:71e2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:311 errors:0 dropped:0 overruns:0 frame:0
TX packets:228 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:30565 (29.8 KiB) TX bytes:35958 (35.1 KiB)
eth1 Link encap:Ethernet HWaddr 00:D0:F8:40:F1:A0
inet6 addr: fe80::2d0:f8ff:fe40:f1a0/64 Scope:Link
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:54 errors:0 dropped:0 overruns:0 frame:0
TX packets:97 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:6696 (6.5 KiB) TX bytes:13821 (13.4 KiB)
Interrupt:209 Base address:0x2e00
eth2 Link encap:Ethernet HWaddr 00:D0:F8:40:F1:A0
inet6 addr: fe80::2d0:f8ff:fe40:f1a0/64 Scope:Link
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:74 errors:0 dropped:0 overruns:0 frame:0
TX packets:162 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:8770 (8.5 KiB) TX bytes:25858 (25.2 KiB)
Interrupt:201 Base address:0x2f00
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:6283 errors:0 dropped:0 overruns:0 frame:0
TX packets:6283 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:9783674 (9.3 MiB) TX bytes:9783674 (9.3 MiB)
(2) Ruijie switch configuration:
lacp system-priority 100    (set the global LACP system priority)
interface GigabitEthernet 0/23
no switchport
lacp port-priority 100    (LACP priority of the interface)
port-group 1 mode active    (enable LACP on the interface in active mode)
interface GigabitEthernet 0/24
no switchport
lacp port-priority 100
port-group 1 mode active
interface AggregatePort 1
no switchport
no ip proxy-arp
ip address 1.1.1.1 255.255.255.0
After LACP is successfully established with the Linux host, the status looks like this:
Ruijie#show lacp summary
System Id:100, 00d0.f822.33ba
Flags: S - Device is requesting Slow LACPDUs
       F - Device is requesting Fast LACPDUs
       A - Device is in active mode
       P - Device is in passive mode
Aggregate port 1:
Local information:
                        LACP port   Oper   Port     Port
Port      Flags  State  Priority    Key    Number   State
----------------------------------------------------------------------
Gi0/23 SA bndl 100 0x1 0x17 0x3d
Gi0/24 SA bndl 100 0x1 0x18 0x3d
Partner information:
                 LACP port                      Oper   Port     Port
Port      Flags  Priority   Dev ID              Key    Number   State
---------------------------------------------------------------------
Gi0/23 SA 255 00d0.f840.f1a0 0x9 0x2 0x3d
Gi0/24 SA 255 00d0.f840.f1a0 0x9 0x1 0x3d
The State column shows the port status: bndl means LACP has been established successfully, and sups means it has not.
Once established, ping the Linux host (1.1.1.2) from the switch:
Ruijie#ping 1.1.1.2
Sending 5, 100-byte ICMP Echoes to 1.1.1.2, timeout is 2 seconds:
  < press Ctrl+C to break >
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms
Ping the switch from the Linux host:
[root@localhost ~]# ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=64 time=0.601 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=64 time=0.606 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=64 time=0.608 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=64 time=0.607 ms
--- 1.1.1.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.601/0.605/0.608/0.024 ms
[root@localhost ~]#
The ping succeeds; LACP is established and working normally.
For the failure test, when the NIC that is currently carrying the traffic is shut down, traffic does not switch over to the other link until the LACP state times out.
Ruijie#sh lacp summary
System Id:100, 00d0.f822.33ba
Flags: S - Device is requesting Slow LACPDUs
       F - Device is requesting Fast LACPDUs
       A - Device is in active mode
       P - Device is in passive mode
Aggregate port 1:
Local information:
                        LACP port   Oper   Port     Port
Port      Flags  State  Priority    Key    Number   State
----------------------------------------------------------------------
Gi0/23 SA sups 100 0x1 0x17 0x45
Gi0/24 SA bndl 100 0x1 0x18 0x3d
Partner information:
                 LACP port                      Oper   Port     Port
Port      Flags  Priority   Dev ID              Key    Number   State
---------------------------------------------------------------------
Gi0/23 SP 0 0000.0000.0000 0x0 0x0 0x0
Gi0/24 SA 255 00d0.f840.f1a0 0x9 0x1 0x3d
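The slow failover observed above is consistent with the "LACP rate: slow" shown earlier (one LACPDU every 30 seconds by default). A possible mitigation on the Linux side, not tested in this lab, is the bonding driver's lacp_rate option in /etc/modprobe.conf:
options bond0 miimon=100 mode=4 lacp_rate=1    (lacp_rate=1 asks the link partner to transmit LACPDUs every second instead of every 30 seconds)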
This article addresses three main questions:
First: what is link aggregation, and what does it do?
Second: how is link aggregation configured?
Third: what are the practical use cases for link aggregation?
First: what is link aggregation, and what does it do?
Answer: Definition: link aggregation, officially called "aggregated links" and informally known as NIC teaming, means binding several NICs together into a single virtual NIC; the outside world communicates with the virtual NIC, which distributes the traffic among its members.
Function: it provides round-robin traffic load balancing and hot standby.
For example:
Link aggregation is like a labor contractor: to earn more money and take on more orders, the contractor needs to hire several workers.
That way, if one worker catches a cold and cannot come to work, another worker can step in.
And when a customer wants a house built, they just go to the contractor instead of looking for construction workers one by one.
Second: how is link aggregation configured?
Answer:
1. The command to create a link aggregation is:
nmcli connection add type team con-name team0 ifname team0 autoconnect yes config '{"runner": {"name": "activebackup"}}'
Read piece by piece: nmcli connection add, of type team;
profile name team0; interface name team0; start automatically at boot;
runner (running mode) set to activebackup, i.e. hot standby.
Taken as a whole: add a team named team0 to the system's NICs, with a profile also named team0, enable it at boot, and run it in hot-standby mode.
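For the round-robin load balancing mentioned earlier, only the runner name changes; a sketch (roundrobin is one of the standard teamd runners, alongside activebackup, loadbalance, broadcast, and lacp):
nmcli connection add type team con-name team0 ifname team0 autoconnect yes config '{"runner": {"name": "roundrobin"}}'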
2. The commands to add member NICs to the aggregation are:
nmcli connection add type team-slave con-name team0-1 ifname eth1 master team0
nmcli connection add type team-slave con-name team0-2 ifname eth2 master team0
Explanation: nmcli connection add, of type team-slave;
profile name team0-1; NIC eth1; master device team0.
Taken as a whole: attach the two NICs eth1 and eth2 to the master device team0.
3. The command to assign team0 an IP address is:
nmcli connection modify team0 ipv4.method manual ipv4.addresses "IP address/netmask" connection.autoconnect yes
4. The command to activate team0 is:
nmcli connection up team0
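Putting the whole sequence together as one sketch (the NIC names eth1/eth2 and the address 192.168.1.10/24 are illustrative assumptions, not values from the text):
nmcli connection add type team con-name team0 ifname team0 autoconnect yes config '{"runner": {"name": "activebackup"}}'
nmcli connection add type team-slave con-name team0-1 ifname eth1 master team0
nmcli connection add type team-slave con-name team0-2 ifname eth2 master team0
nmcli connection modify team0 ipv4.method manual ipv4.addresses 192.168.1.10/24 connection.autoconnect yes
nmcli connection up team0
teamdctl team0 state    (verify the runner and the state of each port)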
Third: what are the practical use cases for link aggregation?
Answer: when a server provides an important service, a single NIC is far from enough: as soon as that NIC fails, clients cannot reach the service, which means lost customers and a poor experience.
Link aggregation solves this by binding several NICs into one virtual NIC, providing NIC hot standby and round-robin traffic load balancing,
thereby keeping the server's service available and giving users a good experience.
Notes:
If a command is mistyped while creating the virtual NIC or adding members, be sure to delete the wrong profile to avoid confusing the network configuration.
The delete command is: nmcli connection delete team0 (team0, or whichever team name applies)
The command to view team0's status is: teamdctl team0 state
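Two related checks with standard NetworkManager/libteam tools (not mentioned in the original text) can also help confirm the setup or the cleanup:
nmcli connection show    (lists all profiles, including team0 and its team-slave members)
teamnl team0 ports    (lists the ports currently enslaved to team0)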
That's all.
(End of this article)
Wishing you happiness!
Luo Gui
2019-03-24