Bonding Links Together for increased bandwidth?
Hi!
I'm working on setting up some bonded or "trunked" links between two of my servers. The hardware is:
- IBM x366 Server (4-way Xeon 3.6 GHz, dual Broadcom GigE NICs, SLES 9)
- Home-built Intel Server (2-way Xeon 2.8 GHz, dual Intel GigE NICs, FC4)
- Switch: Dell PowerConnect 5212 w/ latest firmware
I've read the bonding documentation from the kernel source, but I can't seem to make this work. I've configured my switch to trunk the two ports from each server together (i.e., the IBM server is trunk #1 and the Intel box is trunk #2). On each server I set up a bonded interface like so:
ifdown eth0
ifdown eth1
modprobe bonding mode=0        # mode 0 = balance-rr (round-robin striping across slaves)
ifconfig bond0 <ip address> netmask <netmask> up
ifenslave bond0 eth0
ifenslave bond0 eth1
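For what it's worth, the bonding driver is supposed to expose a status file under /proc; something like the following should list the mode and both slaves, and the per-slave counters should both climb while traffic is flowing (a rough sketch based on the bonding docs, not something I've verified on both distros):

cat /proc/net/bonding/bond0     # bonding mode plus the state of each enslaved NIC
ifconfig bond0                  # aggregate RX/TX counters on the master
ifconfig eth0                   # per-slave counters
ifconfig eth1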
Once I've done this setup on both machines, I'm able to ping between the servers (even if I unplug one of the connections), which is good. However, I can't get any increased bandwidth: NetPIPE measures about 800 Mbps between the servers whether I'm using a single normal link or the trunked link. I'm sure I'm missing something obvious here but can't seem to track it down. Can anyone offer some help?
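In case the measurement method matters, this is roughly how I'm driving NetPIPE between the two boxes (the TCP module; the flags are from memory, so double-check them against the NetPIPE docs, and the address is just a placeholder):

NPtcp                       # on the receiving server
NPtcp -h <ip of receiver>   # on the sending server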
Thanks in Advance
Mark