Can I bond two, 1 gig nics to get 2Gbps without needing a special switch?
I mean, I know Linux can do this, but what hardware do I need? I'm interested in round-robin mode (mode 0, I think?).
I've read conflicting articles on getting this working. After issuing all the commands on both systems to bond their NICs...
... one article said you could use any switch, while another stated you had to use a Layer 4 switch.
... and another article said this will only work if the PCs are directly connected to each other.
Both of these computers are on my LAN, and I need to transfer 30+ TB of data from one system to the other, and I have to use the network. Going at 1 Gbps is gonna take a while; if I could do this at 2 Gbps, it would cut my time in half.
This isn't going to be a common occurrence, so I'd rather not buy 2.5 Gbps cards and switches.
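To put rough numbers on the transfer time (a back-of-the-envelope sketch; it assumes an ideal, overhead-free 1 Gbps link, so real-world times will be somewhat longer):

```shell
# Rough transfer-time estimate for 30 TB at 1 Gbps line rate.
# 1 Gbps = 125,000,000 bytes/s (ignoring protocol overhead).
BYTES=$((30 * 1000 ** 4))          # 30 TB (decimal)
RATE=$((125 * 1000 ** 2))          # 1 Gbps in bytes/s
ELAPSED=$((BYTES / RATE))          # seconds at line rate
HOURS=$((ELAPSED / 3600))
echo "$HOURS hours"                # prints "66 hours"; ~33 at 2 Gbps
```

So it's roughly 2.8 days at 1 Gbps even under ideal conditions, which is why halving it is attractive.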
For bonding to provide a throughput advantage, ALL devices in the path must support the same kind of bonding configuration and handle the desired volume of traffic.
This is why it is easiest with a direct connection between the computers involved: there is no middle device, or set of devices, to complicate things.
If your switch can accept the same bonding as both endpoints, then you should get some value out of the bonding; otherwise, probably not.
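For the endpoint side, a sketch of what a round-robin bond looks like with iproute2 (the interface names enp3s0/enp4s0 and the address are placeholders, so substitute your own; this only covers the hosts, not any switch configuration):

```shell
# Load the bonding driver and create a round-robin (mode 0) bond.
modprobe bonding
ip link add bond0 type bond mode balance-rr

# Enslave both 1 GbE NICs (links must be down before enslaving).
ip link set enp3s0 down
ip link set enp3s0 master bond0
ip link set enp4s0 down
ip link set enp4s0 master bond0

# Bring the bond up and give it an address (a different one per host).
ip link set bond0 up
ip addr add 192.168.50.1/24 dev bond0
```

Run the equivalent on both machines, then test the aggregate throughput with something like iperf3 before starting the real transfer.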
I have used something like this for a full HA cluster, using nodes that had at least six Ethernet ports each. It was a bit of a pain to set up, and a greater pain to administer and troubleshoot, but it worked well for the project (while it worked at all).
Next would be the network switches' uplinks.
If they support SFPs, you could use a couple of 10G interfaces from host to network switch.
And the same again between all network switches.
Here is a link to someone that played with this: https://delightlylinux.wordpress.com...net-and-linux/
Thanks for the links. I will check them.