KVM networking - using VLAN and Bridges on Debian/Ubuntu
hello
I have tried to set up an environment using Debian 6 and Ubuntu 12.04 (both x64), using VLANs to separate the networks between the storage server and the host/node servers. This article describes what I am trying to achieve (option 1): http://blog.davidvassallo.me/2012/05...to-the-guests/

What I have set up:
- a bond for eth0 and eth1
- VLANs on the bond - bond0.10, bond0.100
- bridges to give the guests access to those networks - br10, br1000 - plus a bridge to carry untagged traffic to the host - br1

Problem: the guests do not receive any traffic. I have checked for traffic on the port connected to br10, and no packets arrive there while I am testing connections to that guest (ping, ssh, nmap, http). I would really appreciate it if somebody could point me to another post/article on how to get this setup working. thanks, |
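For reference, a minimal /etc/network/interfaces sketch of the topology described above - a bond of eth0/eth1, tagged VLAN sub-interfaces on the bond, and bridges on top (needs the ifenslave, vlan and bridge-utils packages). The interface names follow the post; the bonding mode and the address are illustrative assumptions, not taken from the original setup:
Code:
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-miimon 100
    bond-mode active-backup     # assumption: use whatever mode the switch is set up for

# tagged VLAN 10 on top of the bond
auto bond0.10
iface bond0.10 inet manual
    vlan-raw-device bond0

# bridge giving guests access to VLAN 10
auto br10
iface br10 inet manual
    bridge_ports bond0.10
    bridge_stp off
    bridge_fd 0

# bridge carrying the untagged traffic for the host itself
auto br1
iface br1 inet static
    address 192.0.2.10          # placeholder address
    netmask 255.255.255.0
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0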
That sounds good to me. Can you post the full output of "ifconfig -a" and "brctl show" on the host? You've checked the port - is that from inside the VM? What about running tcpdump on the virtual interface on the host side, and also on the bridge and on the bond, while testing from a physically remote machine?
|
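For anyone following along, the checks suggested above look roughly like this; vnet0 stands in for the guest's tap interface on the host (the actual name varies), and the other names follow the setup described earlier:
Code:
# interface and bridge state on the host
ifconfig -a
brctl show

# while pinging the guest from a physically remote machine,
# watch each layer in turn to see where the frames stop
tcpdump -ni bond0 vlan 10    # tagged frames arriving on the bond
tcpdump -ni bond0.10         # after VLAN decapsulation
tcpdump -ni br10             # on the bridge
tcpdump -ni vnet0            # on the guest's tap interface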
hello chris
I have tried checking with tcpdump, but I am seeing a lot of traffic on the host. I am using something like: tcpdump -i bond.10 -A which prints everything - is there a better way of checking it? However, on the guest attached to that bridge (br10) there is no traffic at all. It is very puzzling to me, as I am no network expert. |
Looking at the traffic for a specific VM guest would be a sane way to do that... just add "host a.b.c.d" to the tcpdump command.
Please provide the outputs requested above first, though. |
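A narrowed-down version of the suggestion above might look like this; 10.0.10.5 is a placeholder for the guest's address, not a value from the thread:
Code:
# only traffic to or from the guest
tcpdump -ni bond0.10 host 10.0.10.5

# or just ARP, which is usually the first thing to fail when a bridge is broken
tcpdump -ni br10 arp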
I will get those outputs that you asked for later today.
This is the network file on one of the hosts/nodes:
Code:
# The loopback network interface

I have added "hwaddress ether" as there were lots of entries in /var/log/syslog about packets originating with the same MAC as the destination; this seems to have solved that issue. thanks, |
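In case it helps anyone later: "hwaddress ether" is set inside an interface stanza in /etc/network/interfaces. A minimal sketch, assuming it is pinned on the host-facing bridge (the MAC and address below are placeholders, not values from the original config):
Code:
auto br1
iface br1 inet static
    address 192.0.2.10            # placeholder
    netmask 255.255.255.0
    bridge_ports bond0
    # pin a fixed MAC; setting hwaddress was reported above to stop the
    # syslog entries about packets carrying the host's own MAC
    hwaddress ether 52:54:00:12:34:56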
hello
Apologies, but I could not get the output from the system yet. As it is my own system, I will have to do it later on tonight. thanks, |
hello
I have restarted the whole setup from scratch, and it seems there is an issue with the bonding. I can get all the VLANs and bridges working fine directly against eth0 or eth1, but as soon as I add eth0 and eth1 to a bond and build the VLANs and bridges on top of it, I find errors in the logs and connectivity to the guests fails. So for now I will use eth0 only and revisit this in the future. Thanks for taking the time to look into it. regards, Nicolas |
halp
Sorry to kick an old topic, but did anyone ever find a resolution to this? I'm having the exact same problem, and I do not wish to settle for unbonded NICs, because then a switch failure would mean losing the host - which is exactly why we purchased redundant switches. I would be absolutely grateful if anyone has any further information on this.
|
hello
I will have a look at this setup; it has been a while since I last checked, there have been a number of package updates since, and other people have looked into this matter. If there are any changes, I will post back here. Off the top of my head, I recall that somebody who was an expert on Cisco switches changed some settings while trying to solve this, so keep in mind that your switch may not be playing along correctly either - do not assume the problem is on the box alone. tchau |
re:
Our problem isn't the Cisco switch: some of our Ceph nodes use LACP bonding and that works, and we can get all of our hypervisor nodes working with active-backup (i.e. pull cables and not lose packets). We can even get to the point where some of the bridges pass traffic. Here is what one of our attempts at a config file looks like (there are many, many others); this is for CloudStack, by the way:
Code:
auto em1
iface em1 inet manual
    bond-master bond0
    bond-primary em1

auto em2
iface em2 inet manual
    bond-master bond0

auto bond0
iface bond0 inet manual
    bond-mode active-backup
    bond-miimon 100
    bond-slaves em1 em2

# Management
auto bond0.100
iface bond0.100 inet static
    address 10.100.0.33
    netmask 255.255.255.0
    network 10.100.0.0
    broadcast 10.100.0.255
    gateway 10.100.0.1
    dns-nameservers 10.100.0.4
    dns-search dcnfargo.ntgcloud

auto cloudbr0
iface cloudbr0 inet manual
    bridge_ports bond0
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1
|
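Not something from the post, but with a configuration like the one above the bond and bridge state can be checked quickly from the host:
Code:
# bonding driver state: mode, MII status of each slave, currently active slave
cat /proc/net/bonding/bond0

# bridge membership and the MAC addresses learned on cloudbr0
brctl show cloudbr0
brctl showmacs cloudbr0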
(with shame)
Apologies for how long this took - probably not relevant anymore. *But* for completeness: this is the configuration that is working.
Code:
Nicolas |