Linux load-balanced trunking over different switches
I'm new to this community, so as my first post I'd like to take the opportunity to say how lucky I feel to have found you... I'll try to be a constructive member and help any way I can...
That said, my first post is a hard question which I have been troubling over for a week now. I found some material on the net, but that didn't help me much. So here it goes...
I want to interconnect a number of machines in a load-balanced, trunked, fault-tolerant way. There are two gigabit Ethernet switches. Each machine has two 1 Gbit interfaces: the first port on each machine connects to the first gigabit switch, the second to the second switch.
What I would like to do is to configure the machines like this:
- The available bandwidth would be 2 Gbit/s, by combining the speed of ports 1 and 2 on all machines across the two switches.
- If one interface fails on a machine, or one of the switches dies, connectivity between the machines is not broken.
I've been trying to do this on Mandriva 2007 boxes via the bonding driver in the kernel, in modes 0 and 6 as I remember... While the connection itself appears to work when all interfaces are up (I mean I can ping the machines), when I disconnect either interface from either switch, the connection between machines dies immediately, and only a network restart can reverse the effect... So not quite what I want. I think I'm doing it all wrong, for I have never done this before. So...
Could you please tell me a good way to do the above (a load-balanced, fault-tolerant connection using two switches)? Or can you direct me to a source where it's described and explained in detail?
a lot here depends on how you wish to use the links. you can't use this approach to get downstream data into the box that fast at a basic level (and at any level it would be contrived, i expect), but getting data out of the box is feasible. if you use the bond module in mode 0 (balance-rr), then packets leaving the box will round-robin across the functioning NICs. that's crude really, but it's what you've asked for.
moving on to whether the link functions at all, you're then faced with ARP records becoming invalid when a link fails. what are you doing in terms of link monitoring? are you using a valid monitoring IP to ARP against on each link? when you do take a link out, does the bond know that it's down?
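in case it helps, ARP monitoring is set through the bonding module options. this is only a minimal sketch: the mode, device name and target IPs (which should be always-reachable hosts, e.g. your switches' management addresses) are placeholders, not taken from this thread:

```
# /etc/modprobe.conf -- hypothetical sketch of ARP link monitoring
alias bond0 bonding
# arp_interval is in milliseconds; arp_ip_target lists the IPs to ARP-probe.
# note: ARP monitoring replaces MII monitoring, so don't also set miimon.
options bond0 mode=balance-rr arp_interval=500 arp_ip_target=192.168.0.1,192.168.0.2
```

if the probe targets stop answering on one slave, the bond marks that slave down, which is exactly the failure detection the OP is missing.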
doing it properly with two switches would require better switches, e.g. Cisco 3750s, which allow a genuine 802.3ad bond across two separate(ish) devices.
Last edited by acid_kewpie; 08-26-2007 at 04:03 PM.
I have to confess I'm starting to feel a little stupid about this
I'll try to give you as much info as I can...
Let me clarify a little what I'm trying to do here. Ultimately I'm hoping to create a cluster of servers in which the bandwidth between all member machines is 2 Gbit/s and, most importantly, the links are fault tolerant...
The servers will have 2x "NC110T PCIe Gigabit Server Adapter", and the switches will be "HP ProCurve Switch 2626". I'm speaking in the future tense because for now I'm only trying to get a working configuration up under VMware 6; the actual physical machines have not yet arrived.
I believe the ProCurve switches will support 802.3ad (at least I think so, based on what I read last night), though I could be mistaken. Even so, I'm totally new to this, so I guess that won't help me much at where I stand.
How I monitor the links: I read in a FAQ that the bonding driver is able to monitor them using MII. So I use these options in my /etc/modprobe.conf for bonding:
alias trunk0 bonding
options trunk0 max_bonds=1 miimon=100 use_carrier=1
And the content of /etc/sysconfig/network-scripts/ifcfg-trunk0:
...as far as I understand, with this config bonding should notice the link going down, because it monitors it using MII. Or... I'm wrong and I don't understand at all
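for reference, on a sysconfig-style distribution such a file usually looks roughly like the following. this is only a hedged sketch; every value (device names, addresses) is a placeholder, not the poster's actual config:

```
# /etc/sysconfig/network-scripts/ifcfg-trunk0 -- hypothetical sketch
DEVICE=trunk0
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes

# each slave NIC then gets its own file, e.g. ifcfg-eth0:
#   DEVICE=eth0
#   MASTER=trunk0
#   SLAVE=yes
#   ONBOOT=yes
```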
As for the experiment I'm doing now: until the real servers arrive, I'm using VMware Workstation 6 to simulate the whole thing. I went ahead and created a "team" of machines with three LAN segments: 1) the actual LAN of my office network (bridged), 2) LAN1 (like the primary switch), 3) LAN2 (like the secondary switch). I've connected the NICs of the virtual machines to these virtual LANs, and I'm trying to move on from there, just to see how this is done, so that when the real thing arrives I have some idea of how to start setting them up.
one thing that people often mix up with bonding is that they expect one machine to be able to receive traffic from a remote machine across both NICs, which (AFAIK) it can't without a supportive switch. the more machines involved in the equation, the better off you should be, as some of the algorithms allow the server in question to respond to ARP requests from each machine from a different NIC, which allows inbound load balancing to some extent too. if the part you're having trouble with is the fault tolerance side, i'd still be looking at the monitoring part here. check the arp_ip_target options and such for the fault tolerance side of things. also, when you are in a failed mode, really have a look around to see what's actually going on.
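a quick way to "have a look around" in a failed state is to ask the bonding driver itself what it thinks. a generic sketch (bond0/eth0/eth1 are placeholder names, not from this thread):

```shell
# ask the bonding driver for its view of the bond and its slaves;
# after pulling a cable, check "MII Status" per slave and
# "Currently Active Slave" for the bond as a whole
cat /proc/net/bonding/bond0

# cross-check the kernel's carrier state for each slave NIC
# (prints 1 for link up, 0 for link down)
cat /sys/class/net/eth0/carrier
cat /sys/class/net/eth1/carrier
```

if /proc/net/bonding/bond0 still shows the unplugged slave as up, the bond's monitoring isn't working, and no mode will fail over correctly.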
AFAIK there are switches that can do that: SMLT (split multi-link trunking) or DMLT (distributed multi-link trunking).
without bridging there is no way you can make an aggregate interface that terminates on different switches. you can do load balancing (such as application load balancing, or HA), yes, but for PAgP, no.
it's true that the OP has 2 NICs, right?
and he wants fault-tolerant load balancing between those 2 NICs,
but it seems to me that what he's describing is more like an advanced PAgP, like SMLT/DMLT:
bonding 2 or more NICs and splitting them across 2 switches.
you can't do that without SMLT/DMLT-capable bridges/switches.
OK, sure, you can also do a NIC failover between those 2 NICs, like HA, but can you do that with 2 switches? you'd need one more switch to do it for your server, because your server would have to be a router if you wanted to push it that way.
BTW, you were right that some people have misconceptions about load balancing. agreed.
right, yeah, i think i see where that's heading, but it doesn't seem like a likely scenario to end up using... i'm not too familiar with bonding within linux (much more experienced with the cisco world), but the modes of the bond module seem suited to do what's being asked, although i'm not that clear on what level of switch configuration is required for the modes which are not 802.3ad but are apparently able to distribute ARPs across multiple links...
Thanks all very much for answering! These are really interesting points to consider
For now it seems I'll just wait for the hardware to arrive, then start experimenting with the switch management tool and try to make this crazy idea work. Then I'll report back on how it went with the actual hardware.
However... I think at this point I'd be satisfied even with a "simple" automatic failover using the two switches, so that if a switch or a NIC dies, I have no problem. What do you recommend for that purpose?
I definitely want to pursue the original idea though. Man, would it be great if we could figure that out...
I'm thinking, maybe Zebra would be the way to go? Like: all machines would have two routes, and in case of a problem with one route, traffic would be redirected to the second. Do you guys think it makes sense to try to create a network of small routers this way? (Especially since ultimately the machines will be part of a load-balanced Apache cluster...)
nope. overkill. don't get layer 3 to do what layer 2 can already do just fine. you could easily implement this outside of zebra anyway, with standard routing commands. you *must* keep the networking on the end systems as simple as humanly possible.
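for the "simple automatic failover" case, the usual layer-2 answer with plain switches is bonding mode 1 (active-backup), which needs no switch support at all: only one slave carries traffic, and the other takes over when the active link dies. a minimal sketch, with the device names and address as placeholders rather than anything from this thread:

```shell
# load the bonding driver in active-backup mode with MII link monitoring
modprobe bonding mode=active-backup miimon=100

# bring up the bond interface and enslave both NICs
# (eth0 goes to switch 1, eth1 to switch 2)
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1
```

no 2 Gbit/s aggregate this way, but pulling either cable or losing either switch should leave the machine reachable over the surviving slave.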