DEVICE=bond0
ONBOOT=yes
BONDING_OPTS="miimon=100 mode=4 xmit_hash_policy=layer3+4 lacp_rate=1"
TYPE=Bond0
BOOTPROTO=none

DEVICE=eth0
ONBOOT=yes
SLAVE=yes
MASTER=bond0
HOTPLUG=no
TYPE=Ethernet
BOOTPROTO=none

DEVICE=eth1
ONBOOT=yes
SLAVE=yes
MASTER=bond0
HOTPLUG=no
TYPE=Ethernet
BOOTPROTO=none
Here you can see the bonding status:
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: fast
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
        Aggregator ID: 3
        Number of ports: 2
        Actor Key: 17
        Partner Key: 686
        Partner Mac Address: d0:67:e5:df:9c:dc

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:90:c9:95:74
Aggregator ID: 3
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:90:c9:95:75
Aggregator ID: 3
Slave queue ID: 0
Ethtool output:
Settings for bond0:
        Supported ports: [ ]
        Supported link modes:   Not reported
        Supported pause frame use: No
        Supports auto-negotiation: No
        Advertised link modes:  Not reported
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Speed: 2000Mb/s
        Duplex: Full
        Port: Other
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: off
        Link detected: yes

Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supported pause frame use: Symmetric
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: pumbg
        Wake-on: g
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes

Settings for eth1:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supported pause frame use: Symmetric
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: pumbg
        Wake-on: d
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes
These servers are all connected to the same Dell PCT 7048 switch, with each server's two ports added to its own dynamic LAG and set to access mode. Everything looks good so far, right? Yet here are the results of an iperf test from one server to another, with 2 threads:
------------------------------------------------------------
Client connecting to 172.16.8.183, TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 172.16.8.180 port 14773 connected with 172.16.8.183 port 5001
[  3] local 172.16.8.180 port 14772 connected with 172.16.8.183 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   561 MBytes   471 Mbits/sec
[  3]  0.0-10.0 sec   519 MBytes   434 Mbits/sec
[SUM]  0.0-10.0 sec  1.05 GBytes   904 Mbits/sec
Clearly both ports are being used, but at nowhere near 1 Gbps each – which is what they could each do individually before bonding. I've tried all sorts of bonding modes, xmit hash policies, MTU sizes, and so on, but just can't get the individual ports past 500 Mbits/sec... it's almost as if the bond itself is being capped at 1G somewhere! Does anyone have any ideas?
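A quick way to confirm what the bond is actually running with – a minimal sketch, assuming the standard Linux bonding sysfs paths rather than anything taken from the original post – is to read the bonding driver's sysfs attributes:

cat /sys/class/net/bond0/bonding/mode               # should report: 802.3ad 4
cat /sys/class/net/bond0/bonding/xmit_hash_policy   # should report: layer3+4 1
cat /sys/class/net/bond0/bonding/slaves             # should list: eth0 eth1
cat /proc/net/bonding/bond0                         # full status, as shown above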
Update 1/19: Thanks for the comments and questions; I'll try to answer them here, as I'm still very interested in getting the most performance out of these servers. First, I cleared the interface counters on the Dell switch and then let it serve production traffic for a while. Here are the counters for the 2 interfaces that make up the sending server's bond:
Port      InTotalPkts      InUcastPkts      InMcastPkts      InBcastPkts
--------- ---------------- ---------------- ---------------- ----------------
Gi1/0/9   63113512         63113440         72               0

Port      OutTotalPkts     OutUcastPkts     OutMcastPkts     OutBcastPkts
--------- ---------------- ---------------- ---------------- ----------------
Gi1/0/9   55453195         55437966         6075             9154

Port      InTotalPkts      InUcastPkts      InMcastPkts      InBcastPkts
--------- ---------------- ---------------- ---------------- ----------------
Gi1/0/44  61904622         61904552         48               22

Port      OutTotalPkts     OutUcastPkts     OutMcastPkts     OutBcastPkts
--------- ---------------- ---------------- ---------------- ----------------
Gi1/0/44  53780693         53747972         48               32673
It looks like the traffic is being load-balanced perfectly evenly – yet the bandwidth graphs still show almost exactly 500 Mbps per interface when rx and tx are combined.
I can also say for certain that, while serving production traffic, the server is constantly pushing more bandwidth and talking to multiple other servers at the same time.
Edit #2 1/19: Zordache, you made me think the iperf test might be limited on the receiving end, since it was using only 1 port on only 1 interface, so I ran two simultaneous instances of iperf from server1, with "iperf -s" running on server2 and server3. Then I ran the iperf tests from server1 to servers 2 and 3 at the same time:
iperf -c 172.16.8.182 -P 2
------------------------------------------------------------
Client connecting to 172.16.8.182, TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 172.16.8.225 port 2239 connected with 172.16.8.182 port 5001
[  3] local 172.16.8.225 port 2238 connected with 172.16.8.182 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   234 MBytes   196 Mbits/sec
[  3]  0.0-10.0 sec   232 MBytes   195 Mbits/sec
[SUM]  0.0-10.0 sec   466 MBytes   391 Mbits/sec

iperf -c 172.16.8.183 -P 2
------------------------------------------------------------
Client connecting to 172.16.8.183, TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  3] local 172.16.8.225 port 5565 connected with 172.16.8.183 port 5001
[  4] local 172.16.8.225 port 5566 connected with 172.16.8.183 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   287 MBytes   241 Mbits/sec
[  4]  0.0-10.0 sec   292 MBytes   244 Mbits/sec
[SUM]  0.0-10.0 sec   579 MBytes   484 Mbits/sec
The two SUMs added together still won't go over 1 Gbps! As for your other question, my port-channels are set up with only the following two lines:
hashing-mode 7
switchport access vlan 60
Hashing mode 7 is Dell's "Enhanced Hashing". The documentation doesn't say exactly what it does, but I've tried various combinations of the other 6 modes, which are:
Hash Algorithm Type
1 - Source MAC, VLAN, EtherType, source module and port ID
2 - Destination MAC, source module and port ID
3 - Source IP and source TCP/UDP port
4 - Destination IP and destination TCP/UDP port
5 - Source/Destination MAC, source MODID/port
6 - Source/Destination IP and source/destination TCP/UDP port
7 - Enhanced hashing mode
If you have any suggestions, I'd be happy to try the other modes again, or to change the configuration on my port-channels.
Solution: On the hosts, your bond is using the transmit hash policy "Transmit Hash Policy: layer3+4 (1)", which basically means that the interface used for a given connection is chosen based on IP address and port. Your iperf test runs between two systems, and iperf uses a single port, so all of the iperf traffic will likely be limited to a single member of the bonded interface.
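As a rough illustration of why that pins a single flow to one slave – this is a simplified sketch of the layer3+4 idea using only the last IP octets, not the kernel's exact formula – the flow's ports and addresses are hashed and reduced modulo the number of slaves, so a flow with fixed endpoints always maps to the same link:

# Simplified sketch of the layer3+4 idea (not the exact kernel code):
src_port=14772; dst_port=5001          # ports from the first iperf stream above
src_ip=180; dst_ip=183                 # last octets of 172.16.8.180 / .183, for brevity
hash=$(( (src_port ^ dst_port) ^ (src_ip ^ dst_ip) ))
echo "slave index: $(( hash % 2 ))"    # 2 slaves in this bond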
I'm not sure what you're seeing that makes you think both interfaces are being used, or that each one is handling half the traffic. iperf only reports results per thread, not per interface. Looking at the interface counters on the switch would be more interesting.
You mentioned trying different hash modes. Since you're connected to a switch, you also need to make sure you change the hash mode on the switch. The configuration on the host only applies to the packets it transmits; you'd also need to configure the hashing mode on the switch (if that's even an option on your hardware).
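On a PowerConnect-style switch that would mean setting the hashing mode on the port-channel itself. A hedged sketch only – the port-channel number is hypothetical and the exact syntax may vary by firmware – using mode 6 (source/destination IP and TCP/UDP port), which is the closest match to layer3+4 on the host side:

interface port-channel 1
 hashing-mode 6
exit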
Bonding isn't all that useful between just two systems. It doesn't give you the combined bandwidth of both interfaces; it just lets some connections use one interface while others use the other. Some modes can help a little between two systems, but it's a 25-50% improvement at best. You'll almost never get the full capacity of both interfaces.
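If the goal is just to see more than 1 Gbps of aggregate throughput in a test, one thing to try – a sketch, not a guarantee – is giving the layer3+4 hash more distinct flows to spread across the slaves by running iperf with more parallel streams; how the return traffic is split still depends on the switch's own hashing mode:

# More parallel streams give the hash more distinct source ports to
# spread across the two slaves; receive-side balancing still depends
# on the switch's hashing mode.
iperf -c 172.16.8.183 -P 8 -t 30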