Linux - Networking
This forum is for any issue related to networks or networking. Routing, network cards, OSI, etc. Anything is fair game.
I have a Celeron 400 box with a Promise FastTrak ATA/100 controller added in, with a WD1200JB and a 40 GB Maxtor drive.
The 100 Mbps network throughput is way slower than the HDDs can go, and I was thinking it would be cool to have two NICs in there (two FA311s, as it happens) to allow load balancing to occur.
This would be a file server at a LAN party, and would likely get hit up by two or three people at a time doing reasonably large transfers like game patches. Different categories would be on different drives.
I would be using Samba to do the serving, but I wanted to know whether I would see a benefit from load balancing.
Also, if anybody could post a link to a howto that they found particularly helpful, that would be great. Thanks!
Distribution: OpenBSD 4.6, OS X 10.6.2, CentOS 4 & 5
Basically you would have to bind the NICs together, which is fairly tricky. I believe there is a HOWTO for it, but I don't have the URL off the top of my head. In summary, it's possible but extremely tricky. What you could do instead is put each NIC on a separate network and hang a switch off each one, then split the client machines between the two switches, so you would still get the benefit of multiple NICs.
You should even be able to host a game server with the above and have all the clients on both switches join in, as long as you turned on proxy-arp.
Trunking is a protocol used to carry several networks over the same physical wire.
This is more the opposite of what you're trying to do. Trunking makes it possible for several VLANs to communicate through the same media, for example between two buildings.
It can also be used to let one server with only one NIC serve several VLANs. Without trunking you would have to set up the server with one NIC per VLAN.
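To make that last point concrete, here is a sketch of how one server NIC can serve several VLANs over a trunk port, assuming 802.1Q VLAN support is compiled in and the vconfig tool is installed. The VLAN IDs and addresses are made up for illustration.

```shell
# Sketch only -- VLAN IDs, addresses, and interface names are made up.
# Load 802.1Q VLAN support, then carve two virtual interfaces out of
# eth0 so one physical NIC serves both VLANs over a switch trunk port.
modprobe 8021q
vconfig add eth0 10        # creates eth0.10 on VLAN 10
vconfig add eth0 20        # creates eth0.20 on VLAN 20
ifconfig eth0.10 192.168.10.1 netmask 255.255.255.0 up
ifconfig eth0.20 192.168.20.1 netmask 255.255.255.0 up
```

The switch port the server plugs into must be configured as a trunk carrying both VLANs for this to work.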
I'm reading the bonding article in the kernel 2.6 documentation, and it seems to make sense, but I'm a little confused about configuring the cards via the boot scripts, and about what needs to be done on the switch side.
It doesn't seem like those snippets would work in the Slackware ethernet configuration scheme. Is there a HOWTO for this where somebody has used Slackware 9.1? The inet configuration is kinda new to me, and I'm a newbie.
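If the distro's own network scripts don't know about bonding, a common workaround is to leave those interfaces out of the normal configuration and run the bonding commands from a local startup script instead. A sketch along those lines (the address, netmask, and script path are assumptions, not something from the thread):

```shell
# Hypothetical additions to a local boot script (e.g. /etc/rc.d/rc.local).
# Load the bonding driver, give bond0 an address, then enslave the NICs.
/sbin/modprobe bonding
/sbin/ifconfig bond0 192.168.0.10 netmask 255.255.255.0 up
/sbin/ifenslave bond0 eth0
/sbin/ifenslave bond0 eth1
```

The slave interfaces should not be given their own IPs by the regular network scripts, or they will fight with the bond.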
OK, well, if anyone has set up a bridge they will find this fairly easy.
First you should do some network analysis to check that the network itself is not the bottleneck. But if all data is coming from the server you should be safe. And yes, the switch needs to support trunking or channel merging.
So, if all is good, time to set it up. Install both network cards and test them. Then fire up xconfig for your kernel and see if bonding is enabled; if not, recompile with it.
So now create a bond interface. This will take the place of eth0, and you can put it in /etc/modules.conf so it loads on boot. So append:
alias bond0 bonding
OK, now give it a MAC address (just copy eth0's):
ifconfig bond0 hw ether bla bla bla
and an IP:
ifconfig bond0 ip.addy.bla.bla
Next you have to add eth0 and eth1 to the bonded device, just like a bridge:
ifenslave bond0 eth0
ifenslave bond0 eth1
Oh, you may have to install ifenslave, but Mandrake has it preinstalled for me.
Now fire up a few workstations and transfer a few gigs, then monitor eth0, eth1, and bond0 and see what the data is doing. If all goes to plan, bond0 will show twice the data of eth0 and eth1.
This also has a very nice advantage: while you are transferring the data, pull one of the network cables and watch your network carry on without a glitch.
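One way to do that monitoring (a sketch, assuming the bond0/eth0/eth1 names from the setup above and the 2.4/2.6-era bonding driver, which exposes its state under /proc):

```shell
# The bonding driver's own view: mode, MII status, and slave list.
cat /proc/net/bonding/bond0

# Compare byte counters before and after a transfer; bond0's RX/TX
# totals should be roughly the sum of eth0's and eth1's.
ifconfig bond0 | grep bytes
ifconfig eth0  | grep bytes
ifconfig eth1  | grep bytes
```

If a cable is pulled during a transfer, the MII status line for that slave in /proc/net/bonding/bond0 should flip to "down" while traffic keeps flowing on the other slave.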
Well, being a student and all, I have never done this myself, but I have researched it for my thesis. Here is some text of interest from kernel.org:
7. Which switches/systems does it work with?
In round-robin mode, it works with systems that support trunking:
* Cisco 5500 series (look for EtherChannel support).
* SunTrunking software.
* Alteon AceDirector switches / WebOS (use Trunks).
* BayStack Switches (trunks must be explicitly configured). Stackable
  models (450) can define trunks between ports on different physical
  units.
* Linux bonding, of course !
In Active-backup mode, it should work with any Layer-II switches.
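Which of those two modes you get is chosen when the module loads. A sketch of how that might look in /etc/modules.conf (the miimon value here is just a commonly used example, not something from this thread):

```shell
# /etc/modules.conf -- sketch of selecting the bonding mode.
# mode=0 is round-robin (needs a trunking-capable switch, per the list
# above); mode=1 is active-backup (works with any layer-2 switch).
# miimon=100 polls link state every 100 ms so dead slaves are detected.
alias bond0 bonding
options bonding mode=0 miimon=100
```

After changing the options, the bonding module has to be reloaded (or the box rebooted) for the new mode to take effect.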