Bridged network: devices using the bridge fail until pinged.
I am pretty sure the problem here is on the networking side, not the virtualization side. If I am wrong, then a mod should move this.
My network is as follows:
- eth0 and eth1: both NVIDIA network cards; both interfaces are up but have no IP address.
- bond0: has eth0 and eth1 as slaves; no IP address set.
- br0: created on top of bond0; it carries two networks, my static-IP network and my internal network.

Here is what all this looks like:
Code:
bond0 Link encap:Ethernet HWaddr 00:04:4b:15:5e:8d

It uses dhcpcd to obtain a lease, and this works fine: it gets an IP address on my internal network (192.168.0.x), the proper route is added (gateway 192.168.0.1), and the nameserver (192.168.0.1) is set in resolv.conf.

However, I cannot ping any IP address on any network, nor can I ping Internet addresses or resolve their DNS. If I use another box on my network to ping or SSH to the VM, suddenly the VM's networking works 100% and it can ping and connect to any external or internal address. This happens with every guest (Gentoo, Ubuntu, FreeBSD), whether it is a live CD or an already-installed image. Since all VMs have the same problem, and there is no network configuration in the guests aside from pointing at a bridge, I am assuming the problem is with my non-VM networking config. |
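The topology described above can be sketched with iproute2 commands. The interface names (eth0, eth1, bond0, br0) come from the post; the bonding mode and the static address are assumptions, since neither is stated, so treat this as an untested sketch rather than the poster's actual config.

```shell
# Sketch of the described setup: two slave NICs -> bond0 -> br0,
# with addresses only on the bridge. Bonding mode and the static
# address (203.0.113.10, a documentation placeholder) are guesses.

modprobe bonding
ip link add bond0 type bond mode active-backup   # mode is an assumption

# Slaves must be down before enslaving; they carry no IP addresses.
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0

# The bond itself also gets no address; it is only a bridge port.
ip link add br0 type bridge
ip link set bond0 master br0

ip link set eth0 up
ip link set eth1 up
ip link set bond0 up
ip link set br0 up

# Addresses go on the bridge only: a static one, plus the internal
# 192.168.0.x lease obtained via dhcpcd as in the post.
ip addr add 203.0.113.10/24 dev br0   # placeholder static address
dhcpcd br0
```

These commands need root and real interfaces, so they are shown as a configuration fragment rather than a runnable script.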
Google to the rescue:
https://bugzilla.redhat.com/show_bug.cgi?id=487763#c11 This explains the problem; it seems to be a kernel bug. The linked comment includes a Perl script workaround. I have yet to test it. |
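For "dead until pinged" bridge symptoms, one common style of workaround is to generate traffic from the affected side so the bridge and upstream switch learn the guest's MAC address. The sketch below is a hedged guess at that idea using `arping` from iputils; it is not the Perl script from the Red Hat comment, and the address and interface name are placeholders.

```shell
# Hypothetical workaround sketch, run inside the guest: send
# gratuitous (unsolicited) ARP so the bridge/switch forwarding
# tables learn this machine's MAC without waiting for inbound
# traffic. 192.168.0.50 and eth0 are placeholders for the guest's
# own lease and interface; this is NOT the bug report's script.
arping -U -c 3 -I eth0 192.168.0.50
```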
Didn't help.
|
For now I have given up. One network interface has my static and internal IPs; the other has no IP but is bridged, and the VMs use that one.
|
I haven't used bonding, but for bridging you want to add the interfaces to the bridge and then configure the bridge device with an IP address. The individual NICs don't get their own IP addresses; they are just brought up after being added.
I think it is the same for bonding. The bond interface (e.g. bond0) should have an IP and routes defined, but the enslaved interfaces should not. Also, if you create a VPN tunnel, make sure the bonding is done first; the order is important. Make sure you don't have configurations for the interfaces making up the bond device that could cause them to be configured before the bond. Looking at some howtos on Google, a slave interface that has a route defined can cause problems. |
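The checks suggested above (no addresses or routes on the slaves, only on the top-level device) can be verified with a few read-only commands. A sketch, using the interface names from this thread:

```shell
# Verify address ownership: eth0, eth1, and bond0 should show no
# inet addresses; only br0 (or bond0, if not bridged) should.
ip addr

# The bonding driver's own view of its slaves and their state:
cat /proc/net/bonding/bond0

# Any stray routes tied directly to a slave interface would show
# up here and could indicate the slave was configured too early.
ip route show dev eth0
ip route show dev eth1
```

These are diagnostics against a live system, so no output is shown; what matters is that the slaves come back empty.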
No VPN.
Take a look at the ifconfig output I posted: I did not give the devices addresses, I added them to the bond; I did not give the bond an address, I used it to create the bridge; I then gave the bridge the addresses. |
Your other post said "One network interface has my static and internal ip" which seemed to indicate otherwise, and didn't match the original ifconfig results you posted.
Does the bonded interface work on the master without first being pinged? |
What I meant was that the bond had two IP addresses; sorry.
The bonded interface worked fine on the master at all times. The problem only occurred within VMs. At this point I have a working config that just doesn't use bonding, so I am no longer able to simply go back to the bonded form to test any solutions this thread may offer. |