My apologies for not following up on this yet; I've been incredibly busy. Here's some of what you asked for. The network configuration for each container looks like this:
/etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE=eth0
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.0.1.x
NETMASK=255.255.255.0
GATEWAY=10.0.1.1
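For reference, here's the quick sanity check I run against each container's config to make sure none of the keys a static (BOOTPROTO=none) setup relies on got dropped. The `check_ifcfg` helper is just something I hacked up, not anything standard:

```shell
# Hypothetical helper: verify an ifcfg file defines every key a static
# configuration needs. Usage: check_ifcfg /path/to/ifcfg-eth0
check_ifcfg() {
    f="$1"
    for key in DEVICE ONBOOT BOOTPROTO IPADDR NETMASK GATEWAY; do
        # Each key must appear at the start of a line as KEY=...
        grep -q "^${key}=" "$f" || { echo "missing ${key} in ${f}"; return 1; }
    done
    echo "ok: ${f}"
}
```

All of the containers pass this, so I don't think a missing key is the issue.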
The container host (an AWS instance) uses a bridged interface:
/etc/sysconfig/network-scripts/ifcfg-br0:
DEVICE=br0
NAME=br0
BOOTPROTO=dhcp
ONBOOT=yes
TYPE=Bridge
USERCTL=no
NM_CONTROLLED=no
DEFROUTE="yes"
PEERDNS="yes"
PEERROUTES="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_PEERDNS="yes"
IPV6_PEERROUTES="yes"
IPV6_FAILURE_FATAL="no"
/etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE="eth0"
NAME="eth0"
TYPE="Ethernet"
ONBOOT="yes"
BRIDGE="br0"
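And the bridge does come up correctly: the kernel exposes bridge membership under sysfs, so eth0 should (and does) appear under br0's brif directory. The wrapper function is just my own convenience:

```shell
# Check that a port is actually enslaved to a bridge, using the kernel's
# sysfs view (/sys/class/net/<bridge>/brif/<port>). No extra tools needed.
check_bridge_member() {
    bridge="$1"; port="$2"
    if [ -d "/sys/class/net/${bridge}/brif/${port}" ]; then
        echo "${port} is enslaved to ${bridge}"
    else
        echo "${port} is NOT in ${bridge}"
    fi
}
check_bridge_member br0 eth0
```

On the host this reports eth0 as enslaved to br0, so the bridge itself looks fine.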
This is all pretty standard stuff. The routing table on the host looks like this:
Code:
# route
Kernel IP routing table
Destination     Gateway          Genmask          Flags Metric Ref    Use Iface
default         ip-10-0-1-1.us-  0.0.0.0          UG    0      0        0 br0
10.0.1.0        0.0.0.0          255.255.255.0    U     0      0        0 br0
192.168.122.0   0.0.0.0          255.255.255.0    U     0      0        0 virbr0
whereas on one of the containers we see this:
Code:
# route
Kernel IP routing table
Destination     Gateway          Genmask          Flags Metric Ref    Use Iface
default         10.0.1.1         0.0.0.0          UG    0      0        0 eth0
10.0.1.0        0.0.0.0          255.255.255.0    U     0      0        0 eth0
link-local      0.0.0.0          255.255.0.0      U     1032   0        0 eth0
The gateway cannot be pinged from the containers, and that's the crux of the problem. I've been deploying host/container configurations exactly like this on local hardware without any problems, so the issue is somehow related to AWS; I'm just not sure how.
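One AWS-specific thing I still need to rule out is the instance's source/destination check, since that can drop traffic not addressed to the instance's own IP. Something like this should show it (instance ID below is a placeholder; requires the AWS CLI and credentials):

```shell
# Placeholder instance ID -- substitute the real one for the container host.
aws ec2 describe-instance-attribute \
    --instance-id i-xxxxxxxx \
    --attribute sourceDestCheck
```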
I'll try to get some tcpdump data in a day or two.
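When I do, the rough plan is to run these on the host (in two terminals, as root) while pinging 10.0.1.1 from a container, to see whether the container's ARP requests make it onto the bridge and whether anything ever comes back:

```shell
# Watch ARP and ICMP on the bridge side, with link-level headers (-e) so the
# source/destination MACs are visible.
tcpdump -n -e -i br0 'arp or icmp'
# Same capture on the physical interface, to see what actually leaves the host.
tcpdump -n -e -i eth0 'arp or icmp'
```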