how to mount nfs for higher data transfer rate with multiple NIC cards
hi,
I am setting up a cluster for an application that needs a high data transfer rate among the cluster nodes. Let me describe my cluster setup: I have 4 working nodes and a manager node.
Each machine has 2 gigabit and 2 10/100 Mb NIC cards. The manager node has 1.5T of storage, which is a RAID of 15k rpm disks. All the gigabit ports are connected to a gigabit switch. Now I want to route all data transfer between the nodes and the manager through the gigabit NICs via NFS. Help me in doing this ..
what are you having problems with here? if you only wish to use gigabit connectivity, then only use the gigabit NICs and it'll work just fine. if you want to go further than that, you can look at channel bonding the gig NICs to form a single load-balanced bond0 interface, but please make sure you understand the bonding configuration first, so as not to think you're getting something you aren't. if you want to go further into this, give us more info, like what type of switch it actually is.
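For reference, a minimal bonding sketch on a RHEL/CentOS-style box might look like the following. All device names, the IP address, and file paths here are made-up examples; the mode you pick depends on your switch (balance-alb needs no switch support, while 802.3ad needs LACP configured on the switch):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0  (example address)
DEVICE=bond0
IPADDR=192.168.10.11
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=balance-alb miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0  (repeat for the second gig NIC)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

Restart networking afterwards and check /proc/net/bonding/bond0 to confirm both slaves are up.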
the switch i am using is a Dell PowerConnect 2724 gigabit switch, but i want only the data transfer to be done through gigabit; the rest of the control traffic will be done through the 100 Mbps cards. my problem is how do i set up NFS for this ...
well, it's none of NFS's business which NICs are being used. that's down to the routing tables and such. i'd still wonder why you'd care about ever using the 100 Mbps NICs... what's the point? if they were on a separate network, then that's one very simple way to achieve your goal, but past that, i think you'd be looking at using iptables to mark the packets based on the ports they're using and routing them that way, and that's going to be pretty intensive i'd think when it's such high volumes of traffic.
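To sketch the "separate network" idea: put the gigabit NICs on their own subnet and mount NFS via the manager's address on that subnet, and ordinary routing sends all NFS traffic over the gigabit link. The subnet, export path, and mount point below are made-up examples:

```
# On the manager node, export the storage only to the gigabit subnet
# (one line in /etc/exports):
/data  192.168.10.0/24(rw,sync,no_subtree_check)

# On each worker node, mount via the manager's gigabit-subnet address:
#   mount -t nfs 192.168.10.1:/data /mnt/data
# or the equivalent /etc/fstab entry:
192.168.10.1:/data  /mnt/data  nfs  rw,hard,intr  0 0
```

The 100 Mb NICs can stay on a different subnet for control traffic and will never see NFS packets.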
Like Acid said, unless the 100s are on a separate network, pull them. You did not mention what form of RAID you are using. I would test the disks' transfer rate before I went to channel bonding. GigE should be able to outrun most (certainly not all) disks. I bonded my 100s together and it was a PITA. I could use modes 1-5, but I never did get mode 6 to work.
Lazlow
Edit: Not all GigE NICs are created equal. You can look around for the reviews. As I recall, the faster NICs could sustain nearly twice the transfer rate of the slower ones (that is from memory, from about a year ago).
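The disk test Lazlow suggests can be as rough as a dd run; the filename and sizes below are arbitrary. conv=fdatasync makes dd flush to disk before reporting, so the rate reflects the array rather than the page cache:

```shell
# Write 256 MB sequentially and report the sustained rate:
dd if=/dev/zero of=ddtest.bin bs=1M count=256 conv=fdatasync
# Read it back (on a real test, drop caches first for an honest number):
dd if=ddtest.bin of=/dev/null bs=1M
rm -f ddtest.bin
```

If the write rate is well under ~110 MB/s, a single GigE link can already keep up with the array and bonding buys you little.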