Linux - Networking
This forum is for any issue related to networks or networking. Routing, network cards, OSI, etc. Anything is fair game.
alias eth0 e1000e
alias eth1 e1000e
alias bond0 bonding
options bond0 miimon=100 mode=0 (I think this needs to be mode=4, but I'm not sure; see the mode=4 sketch below my questions.)
alias scsi_hostadapter arcmsr
alias scsi_hostadapter1 ahci
alias scsi_hostadapter2 usb-storage
Does everything look correct? When I restart the network service or reboot, this is what shows up in /var/log/messages:
Quote:
Oct 25 15:31:24 server1 kernel: bonding: bond0: Removing slave eth0
Oct 25 15:31:24 server1 kernel: bonding: bond0: Warning: the permanent HWaddr of eth0 - 00:00:00:00:00:00 - is still in use by bond0. Set the HWaddr of eth0 to a different address to avoid conflicts.
Oct 25 15:31:24 server1 kernel: bonding: bond0: releasing active interface eth0
Oct 25 15:31:24 server1 kernel: eth0: changing MTU from 1500 to 1500
Oct 25 15:31:24 server1 kernel: bonding: bond0: Removing slave eth1
Oct 25 15:31:24 server1 kernel: bonding: bond0: releasing active interface eth1
Oct 25 15:31:24 server1 kernel: eth1: changing MTU from 1500 to 1500
Oct 25 15:31:24 server1 kernel: ADDRCONF(NETDEV_UP): bond0: link is not ready
Oct 25 15:31:24 server1 kernel: bonding: bond0: Adding slave eth0.
Oct 25 15:31:24 server1 kernel: bonding: bond0: enslaving eth0 as an active interface with a down link.
Oct 25 15:31:24 server1 kernel: bonding: bond0: Adding slave eth1.
Oct 25 15:31:24 server1 kernel: bonding: bond0: enslaving eth1 as an active interface with a down link.
Oct 25 15:31:27 server1 kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Oct 25 15:31:27 server1 kernel: bonding: bond0: link status definitely up for interface eth0.
Oct 25 15:31:27 server1 kernel: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready
From my understanding, for this type of bonding I do not have to do anything special to the switch ports, so I have not. If I were going to do VLAN tagging, then I would have to trunk the ports together. Is this correct?
Are the messages in the messages file normal, or do I have something misconfigured?
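For comparison, here is my rough sketch of what the bond0 lines would look like for 802.3ad (mode=4) instead of round-robin. This is only a guess on my part; as far as I know, mode=4 also requires the switch ports to be configured as an LACP group, so the switch side here is an assumption:
Code:
alias bond0 bonding
# 802.3ad dynamic link aggregation; miimon polls link state every 100 ms
# (assumes the two switch ports are set up as an LACP group)
options bond0 mode=4 miimon=100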
OK, so I followed those instructions. This is the output in the messages file when restarting the network service:
Quote:
Oct 26 11:04:43 server1 kernel: bonding: bond0: Removing slave eth0
Oct 26 11:04:43 server1 kernel: bonding: bond0: Warning: the permanent HWaddr of eth0 - 00:00:00:00:00:00 - is still in use by bond0. Set the HWaddr of eth0 to a different address to avoid conflicts.
Oct 26 11:04:43 server1 kernel: bonding: bond0: releasing active interface eth0
Oct 26 11:04:44 server1 kernel: eth0: changing MTU from 1500 to 1500
Oct 26 11:04:44 server1 kernel: bonding: bond0: Removing slave eth1
Oct 26 11:04:44 server1 kernel: bonding: bond0: releasing active interface eth1
Oct 26 11:04:44 server1 kernel: eth1: changing MTU from 1500 to 1500
Oct 26 11:04:44 server1 kernel: ADDRCONF(NETDEV_UP): bond0: link is not ready
Oct 26 11:04:44 server1 kernel: bonding: bond0: Adding slave eth0.
Oct 26 11:04:44 server1 kernel: bonding: bond0: enslaving eth0 as an active interface with a down link.
Oct 26 11:04:44 server1 kernel: bonding: bond0: Adding slave eth1.
Oct 26 11:04:44 server1 kernel: bonding: bond0: enslaving eth1 as an active interface with a down link.
Oct 26 11:04:47 server1 kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Oct 26 11:04:47 server1 kernel: bonding: bond0: link status definitely up for interface eth0.
Oct 26 11:04:47 server1 kernel: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready
A couple of problems from the log file:
It shows a warning that the permanent MAC of eth0 is still in use and tells me to change the MAC address of eth0. Is this necessary?
It shows enslaving both eth0 and eth1, but it only reports bringing up eth0 in the bond. Is this normal output when bonding two NICs together for bandwidth?
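One check that would answer this, I think, is the bonding driver's status file under /proc; a sketch of what I would expect to see, assuming the standard bonding interface:
Code:
cat /proc/net/bonding/bond0
# should print a "Slave Interface:" stanza for each of eth0 and eth1,
# each showing "MII Status: up" once both links are ready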
"Oct 26 11:04:43 server1 kernel: bonding: bond0: Warning: the permanent HWaddr of eth0 - 00:00:00:00:00:00 - is still in use by bond0. Set the HWaddr of eth0 to a different address to avoid conflicts."
I wonder if the NIC or the driver doesn't support bonding?
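One way to check would be to ask the driver what it reports as the permanent MAC; a sketch assuming a version of ethtool that supports -P (the address shown is made up):
Code:
ethtool -P eth0
# Permanent address: 00:1b:21:xx:xx:xx  (hypothetical; all zeros here would
# suggest the e1000e driver is not reporting a permanent HWaddr)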
The last thing, though, is this: 802.3ad will, I believe, only share the connection; it will not increase your bandwidth PER CONNECTION. So if you bond two 1 Gbps interfaces, the maximum throughput you will get through a single connection is 1 Gbps, but you could have two 1 Gbps connections to the same address. I bonded to see what maximum bandwidth I could get out of my 10-disk RAID array, but the array would still saturate any single connection I could get to the machine (if that makes sense).
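To illustrate, a single-stream vs. parallel-stream test should show the difference; this is just a sketch assuming iperf is installed on both ends and that the bond's hash policy actually spreads the two streams across both slaves (hostname made up):
Code:
# one TCP stream: tops out around a single slave's speed (~1 Gbps)
iperf -c server1 -t 30
# two parallel streams: can use both slaves, up to ~2 Gbps aggregate
iperf -c server1 -t 30 -P 2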
Regarding your error "permanent HWaddr of eth0 - 00:00:00:00:00:00 - is still in use by bond0": no network card should have a blank MAC address, AFAIK. Also, once you've created a bond, unless you bring it down properly, bond0 will remain intact and you'll just be reconfiguring the existing one. I would recommend, if you can, doing a reboot (or at the very least an /etc/init.d/networking restart) to ensure that bond0 is destroyed between reconfiguration attempts.
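Something along these lines should do a clean teardown (a sketch; it assumes the sysfs bonding interface is available on your kernel):
Code:
ifdown bond0                                      # take the bond down
echo -eth0 > /sys/class/net/bond0/bonding/slaves  # detach slaves by hand
echo -eth1 > /sys/class/net/bond0/bonding/slaves
rmmod bonding                                     # next config starts from scratch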
Pull the plug on one of them, or Wireshark the data.
Depending on the setup, you might notice more bandwidth. I doubt it, as most other parts are slower than the bond.
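If you do capture, watching each slave separately would show whether traffic is really going out both; a sketch assuming tcpdump is installed:
Code:
# watch each slave interface to see whether traffic actually splits across them
tcpdump -n -i eth0 -c 20 &
tcpdump -n -i eth1 -c 20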