Quad interface bonding - mode 1 and 2 Gbit load balancing
Hello,
So I am a network admin, and I have a team that works on our Linux deployments. Something that came up was their attempt to introduce redundancy into their server deployment with an active-passive, mode 1 configuration. However, they have load concerns and are sometimes peaking past 1 Gbit of traffic.
Google has failed me, and I honestly don't believe they are investigating this as deeply as I am; I will explain my motivations later.
Their current deployment has eth0 and eth1 in either a mode 0 (balance-rr) or potentially a mode 4 (802.3ad) configuration as bond0, facing switch1. The same configuration is used on eth2 and eth3 as bond1, facing switch2.
In the present design, failure of a switch means pointing everything at another IP, as the two bonds have different IP configurations on them.
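To make the layout concrete, here is a rough sketch of what I understand their per-switch bonds to look like, using iproute2 commands. The interface names and modes are from the description above; the miimon value and IP addresses are just illustrative assumptions on my part:

    # Sketch of the existing layout (mode 4 / 802.3ad shown; mode 0 would be balance-rr).
    modprobe bonding

    ip link add bond0 type bond mode 802.3ad miimon 100
    ip link add bond1 type bond mode 802.3ad miimon 100

    # Slaves have to be down before they can be enslaved
    ip link set eth0 down; ip link set eth0 master bond0   # to switch1
    ip link set eth1 down; ip link set eth1 master bond0   # to switch1
    ip link set eth2 down; ip link set eth2 master bond1   # to switch2
    ip link set eth3 down; ip link set eth3 master bond1   # to switch2

    ip link set bond0 up
    ip link set bond1 up

    # Separate IPs per bond, which is why a switch failure means repointing clients
    ip addr add 10.0.1.10/24 dev bond0
    ip addr add 10.0.2.10/24 dev bond1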
One of their thoughts was to bond the bonded interfaces, and so far nothing I have been able to find says whether that works or not. So the first question, to which I suspect the answer is no: can you bond bond0 and bond1 into bond2, where bond0 and bond1 use mode 0 or 4 and bond2 uses mode 1?
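To make that first question concrete, the layering I am asking about would look roughly like this. It is purely illustrative; I am not claiming the bonding driver actually accepts a bond as a slave of another bond:

    # The nesting in question; I am not asserting this actually works.
    ip link add bond2 type bond mode active-backup miimon 100
    ip link set bond0 down; ip link set bond0 master bond2   # active side, switch1
    ip link set bond1 down; ip link set bond1 master bond2   # passive side, switch2
    ip link set bond2 up
    ip addr add 10.0.1.10/24 dev bond2   # single IP regardless of which switch is live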
Proceeding on the assumption that this cannot be done: is there a bonding mode that would allow all 4 interfaces to be added to a single bond, so that the 2 interfaces facing switch1 are used together for increased throughput, while the 2 facing switch2 would likewise aggregate for throughput but remain in a passive state until switch1 fails or we manually fail over?
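For contrast, the obvious flat alternative would be a single mode 1 bond over all four NICs, but as I understand active-backup, only one slave carries traffic at a time, so that gives redundancy without the extra throughput we need. A rough sketch, with illustrative addressing, of what I mean by the flat alternative:

    # Flat alternative: one active-backup bond over all four NICs,
    # replacing the two per-switch bonds. Only one slave is active at
    # any moment, so throughput stays capped at ~1 Gbit.
    ip link add bond0 type bond mode active-backup miimon 100
    for nic in eth0 eth1 eth2 eth3; do
        ip link set "$nic" down
        ip link set "$nic" master bond0
    done
    ip link set bond0 up
    ip addr add 10.0.1.10/24 dev bond0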
The network hardware is older, and cross-chassis EtherChannel is not supported (otherwise this would be much simpler).
**********
Now for the motivation aspect, so skip this if you don't care.
**********
We already attempted to configure some options for this, and during the rollback the two of us got out of sync, with steps being done too soon or not soon enough. Regardless of the order things happened in, we ended up creating a spanning-tree loop through the server, causing the Catalyst 6500s to spike to 100% CPU, spanning tree to flap back and forth, HSRP to go split-brain, and so on.

Although this isn't the main subject of this thread, my curiosity has me googling like a madman trying to determine what configuration the server must have had, combined with what configuration the Catalysts must have had, to create this scenario. Unfortunately it was not something we could investigate at the time, as all of production went down and we needed to act quickly to resolve the incident. So if anyone has had experience with this scenario, I would love to hear about it (it doesn't have to involve 4 interfaces; a 2-interface bond that resulted in accidentally bridging 2 switches would likely suffice).
So I have hopefully provided enough information for someone to answer the bonding question on how to get the 4 interfaces working in a throughput/active-passive design. And if anyone knows how we created the bridging scenario, I would be even more grateful.
Sorry for the long post, and thanks for taking the time to read it.
Meeks