Linux - Networking
This forum is for any issue related to networks or networking. Routing, network cards, OSI, etc. Anything is fair game.
I have two Dell servers that I would like to set up with Intel dual-port gigabit network cards. I would like to bond both ports on the Intel card so they work as a single device, so that I can get high-volume transfers without having to use fiber. I am also going to be setting up jumbo frames, since my biggest issue is moving data from this server to another with the same configuration. So my network would be set up as two servers on a jumbo-frame-enabled switch, each using a dual-port network card, on a separate VLAN. They will be transferring about 126GB of data nightly from one server to the other.
Please help, as I am new to Linux and trying to make the jump from Microsoft.
Not sure if this is what you are after, but channel bonding might work if you want to bond two NICs as one. The only thing confusing me is the mention of a separate VLAN. There is a possibility of running Linux and setting it up with a VLAN-type configuration. I have never done that, but it could work. I have no links for it; you might try www.google.com/linux.
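In case it is useful, here is a rough sketch of what channel bonding can look like on a Red Hat style system. The interface names, addresses and bonding mode below are only examples, not something from your setup:

    # /etc/modprobe.conf -- load the bonding driver for bond0
    # (balance-rr is one mode aimed at raw throughput; miimon enables link monitoring)
    alias bond0 bonding
    options bond0 mode=balance-rr miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bonded interface itself
    DEVICE=bond0
    IPADDR=192.168.10.1
    NETMASK=255.255.255.0
    BOOTPROTO=none
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth0 -- one slave (repeat for eth1 with DEVICE=eth1)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes

After a service network restart, cat /proc/net/bonding/bond0 should list both slaves as active.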
Found the thing I had seen on VLANs. I just built the latest kernel, and one of the network drivers for a Realtek gigabit card has an option for VLAN support. That's about all I know on that subject.
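For what it is worth, VLAN tagging on Linux is normally handled by the 8021q kernel module rather than by the NIC driver itself (the driver option is usually just hardware acceleration for it). A quick sketch, with a made-up VLAN ID and address:

    # Load the 802.1Q tagging module
    modprobe 8021q
    # Create a tagged sub-interface for VLAN 10 on eth0 and bring it up
    vconfig add eth0 10
    ifconfig eth0.10 192.168.10.1 netmask 255.255.255.0 up

The switch port has to be configured to carry that VLAN as well, of course.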
I have read the articles and we ordered the Intel cards. We should go into testing mode this Thursday. Our goal is to increase the performance of our SQL backup from one server to another. By setting up the VLAN we can remove any traffic that is not needed, and we hope to decrease our backup window. Whenever I look at the total traffic in the system, one server does not seem to be moving enough packets. So my first goal is to create the port bond, and the next is to increase the MTU size to allow for jumbo frames. I have already enabled the jumbo frames option on the switch and created the VLAN. So it is down to the testing phase now to see if we can get these systems to talk faster with each other.
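As far as I understand it, the MTU change on the Linux side is roughly this (the interface name and the 9000 value are placeholders until testing settles on the right number):

    # Temporary change for testing
    ifconfig eth0 mtu 9000

    # Persistent change on Red Hat style systems: add this line to
    # /etc/sysconfig/network-scripts/ifcfg-eth0 and restart networking
    MTU=9000

    # Verify the interface picked it up
    ifconfig eth0 | grep MTU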
Sounds like fun. If you get it all set up, you might post here what you have done so it could help others. List the hardware model and brand and how it is set up.
I would like to first thank you for the help with those links. We started the test using two similar computers. The systems were Pentium 4 2GHz and above with 20GB or larger EIDE hard drives and Gb network cards. One system ran Red Hat Enterprise Linux 3 and the other ran Fedora Core 5. Here is the sequence of events:
1. Install the dual-port Intel Pro/1000 MT cards.
2. Set up one card on the network.
3. Create the NFS share (a sample export and mount is sketched after this list).
4. Run the first test using a 5GB file over standard gigabit with no MTU modifications.
5. Change the settings on the Dell PowerConnect 2716 switch to support jumbo frames, using its graphical interface. (Found an error in the switch interface that required me to use an old version of Java instead of the latest; discussed it with Dell, who admitted it was an issue.)
6. Change the MTU on the single card and test multiple file copies. It was determined that the most appropriate MTU was 9100. After setting that up we were able to achieve about 1.3 times the transmission speed, a 76.82% improvement.
7. Change to dual-card bonded mode with standard, non-MTU-modified settings.
8. Discover that after doing this it was necessary to revisit the NFS share. It seems I just had to go into the control panel, open each share, and press OK before I could share it again.
9. Run the test using the same 5GB file for a baseline and get a 77.25% improvement.
10. Set up the MTU changes and prepare for optimal transmission; after testing multiple times we got about a 1.37x improvement.
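Here is the NFS export and mount sketch mentioned in step 3. The paths, addresses and rsize/wsize values are placeholders for illustration rather than the exact ones we used:

    # On the server: /etc/exports
    /data  192.168.10.0/24(rw,sync,no_root_squash)

    # Re-read the exports (or restart the NFS service)
    exportfs -ra

    # On the client: mount with large read/write sizes, then time a copy
    mount -t nfs -o rsize=32768,wsize=32768 192.168.10.1:/data /mnt/data
    time cp /mnt/data/test-5gb.img /tmp/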
Our single-card transfer of the 5GB file over standard gigabit took about 3:53.
Our dual-card transfer with the MTU changes took 2:50.
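For reference, 3:53 is about 233 seconds and 2:50 is about 170 seconds, so with a 5GB file that works out to very roughly 22MB/s versus 30MB/s, and 233/170 is about 1.37, which matches the 1.37x figure above.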
In the end it was the hard drives and CPUs on the computers that really slowed the process down. I believe that we might get better performance from our bigger servers running faster hard drives.
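If anyone wants to check the same thing on their own hardware, watching the disks and CPU during the transfer is the simplest way to see which one is the limit (iostat comes from the sysstat package):

    # Extended disk utilization and throughput, refreshed every 5 seconds
    iostat -x 5
    # CPU usage, run queue and I/O wait, refreshed every 5 seconds
    vmstat 5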
The reason an MTU of 9100 was used is that NFS only really goes up to about 8400 bytes.
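A simple way to confirm that jumbo frames actually make it end to end is a do-not-fragment ping sized just under the MTU (the address is a placeholder; 9072 is 9100 minus the 28 bytes of IP and ICMP headers):

    # Should succeed with MTU 9100 on the whole path; 9073 or more should fail
    ping -M do -s 9072 192.168.10.2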
I am including a link to one website that does a good job of explaining why. There are other places, but this one lays out the reasons very nicely: small-tree.com/jumbo6.htm
I will keep you posted on what other results we get when we go live sometime this week. I hope this helps anyone else trying to accomplish the same project.