My libvirt VMs will not communicate with one another at all. I'm following the setup instructions in Chapters 1 and 2 of Michael Jang's RHCSA/RHCE exam prep guide (e.g.,
https://scanlibs.com/rhcsa-linux-cer...udy-guide-7th/). To rule out forwarding restrictions, I've tried disabling the firewall on both my host machine and my VMs, and I've tried setting SELinux to permissive mode. Nothing helps. It also doesn't matter whether the VMs are on the same virtual network or on different ones.
Host:
Code:
[root@localhost ipv4]# cat /proc/sys/net/ipv4/ip_forward
1
Code:
...
3: wlp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 58:91:cf:0f:d8:cc brd ff:ff:ff:ff:ff:ff
inet 192.168.1.130/24 brd 192.168.1.255 scope global dynamic wlp3s0
valid_lft 83070sec preferred_lft 83070sec
inet6 fe80::5a91:cfff:fe0f:d8cc/64 scope link
valid_lft forever preferred_lft forever
31: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
link/ether 52:54:00:16:83:5e brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
32: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
link/ether 52:54:00:16:83:5e brd ff:ff:ff:ff:ff:ff
34: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
link/ether fe:54:00:52:b8:d7 brd ff:ff:ff:ff:ff:ff
45: virbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
link/ether 52:54:00:33:b0:3c brd ff:ff:ff:ff:ff:ff
inet 192.168.100.1/24 brd 192.168.100.255 scope global virbr1
valid_lft forever preferred_lft forever
46: virbr1-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr1 state DOWN qlen 1000
link/ether 52:54:00:33:b0:3c brd ff:ff:ff:ff:ff:ff
48: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UNKNOWN qlen 1000
link/ether fe:54:00:f3:e1:55 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fef3:e155/64 scope link
valid_lft forever preferred_lft forever
49: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr1 state UNKNOWN qlen 1000
link/ether fe:54:00:0e:6e:af brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fe0e:6eaf/64 scope link
valid_lft forever preferred_lft forever
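One detail I only noticed while writing this up: in the host output above, vnet1 and vnet2 each show a "master" bridge, but vnet0 does not. Here's a quick sketch that pulls the tap-to-bridge assignments out of the interface header lines (the sample lines are copied verbatim from my output above; on the live host you'd feed it `ip -o link show | grep vnet` instead):
Code:

```shell
# List which bridge each vnet tap is enslaved to (the "master" field).
# Sample lines are the interface headers from my `ip a` output above.
sample='34: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
48: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UNKNOWN qlen 1000
49: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr1 state UNKNOWN qlen 1000'
result=$(printf '%s\n' "$sample" | awk '{
    iface = $2; sub(/:$/, "", iface)          # strip trailing colon from name
    master = "(none)"
    for (i = 1; i <= NF; i++)
        if ($i == "master") master = $(i + 1) # bridge the tap is attached to
    print iface, "->", master
}')
printf '%s\n' "$result"
# vnet0 -> (none)
# vnet1 -> virbr0
# vnet2 -> virbr1
```

I can't say whether vnet0 genuinely having no master bridge is real or just an artifact of when I captured the output, but a tap that isn't enslaved to virbr0 would explain that guest being unreachable over the bridge.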
Virtual network that both of the VMs live on:
Code:
[root@localhost networks]# cat default.xml
<network>
  <name>default</name>
  <uuid>437371d4-889b-4446-ba4f-5ee20e6964e5</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:16:83:5e'/>
  <domain name='default'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.128' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
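As a sanity check on that XML, here's a small sketch (all addresses taken from my setup — the DHCP range above and the two static VM addresses shown below) that tests whether each VM's static IP lands inside the dynamic pool. I doubt it explains the ping failures, but tester2's .150 does sit inside the DHCP range:
Code:

```shell
# Check whether each VM's static IP collides with the DHCP pool
# defined in default.xml (values copied from the XML above).
range_start=128   # from <range start='192.168.122.128' ...>
range_end=254     # from <range ... end='192.168.122.254'/>
result=$(
for ip in 192.168.122.50 192.168.122.150; do
    host=${ip##*.}   # last octet of the address
    if [ "$host" -ge "$range_start" ] && [ "$host" -le "$range_end" ]; then
        echo "$ip is INSIDE the DHCP pool"
    else
        echo "$ip is outside the DHCP pool"
    fi
done
)
printf '%s\n' "$result"
# 192.168.122.50 is outside the DHCP pool
# 192.168.122.150 is INSIDE the DHCP pool
```

The overlap alone shouldn't produce "Destination Host Unreachable" on the same bridge, but it could cause an address conflict later if dnsmasq ever hands .150 to another guest.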
VM1 (server.example.com):
Code:
[root@localhost ipv4]# ip a sh
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:52:b8:d7 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.50/24 brd 192.168.122.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe52:b8d7/64 scope link
valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 52:54:00:6e:ad:c0 brd ff:ff:ff:ff:ff:ff
inet 192.168.124.1/24 brd 192.168.124.255 scope global virbr0
valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 500
link/ether 52:54:00:6e:ad:c0 brd ff:ff:ff:ff:ff:ff
Code:
[root@localhost ipv4]# ping -c 1 192.168.122.1
PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
From 192.168.122.50 icmp_seq=1 Destination Host Unreachable
--- 192.168.122.1 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
[root@localhost ipv4]# ping -c 1 192.168.122.50
PING 192.168.122.50 (192.168.122.50) 56(84) bytes of data.
64 bytes from 192.168.122.50: icmp_seq=1 ttl=64 time=0.050 ms
--- 192.168.122.50 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms
[root@localhost ipv4]# ping -c 1 192.168.122.150
PING 192.168.122.150 (192.168.122.150) 56(84) bytes of data.
From 192.168.122.50 icmp_seq=1 Destination Host Unreachable
--- 192.168.122.150 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
VM2 (tester2.example.com -- same virtual network):
Code:
[root@localhost network-scripts]# ip a sh
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:f3:e1:55 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.150/24 brd 192.168.122.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fef3:e155/64 scope link
valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 52:54:00:e9:fb:c3 brd ff:ff:ff:ff:ff:ff
inet 192.168.124.1/24 brd 192.168.124.255 scope global virbr0
valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 500
link/ether 52:54:00:e9:fb:c3 brd ff:ff:ff:ff:ff:ff
Code:
[root@localhost network-scripts]# ping -c 1 192.168.122.1
PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.365 ms
--- 192.168.122.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms
[root@localhost network-scripts]# ping -c 1 192.168.122.50
PING 192.168.122.50 (192.168.122.50) 56(84) bytes of data.
From 192.168.122.150 icmp_seq=1 Destination Host Unreachable
--- 192.168.122.50 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
[root@localhost network-scripts]# ping -c 1 192.168.122.150
PING 192.168.122.150 (192.168.122.150) 56(84) bytes of data.
64 bytes from 192.168.122.150: icmp_seq=1 ttl=64 time=0.098 ms
--- 192.168.122.150 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms
I've been trying off and on for three weeks to get this working as a platform for testing service availability. The book's authors don't list a website or contact information, or I'd pester them directly, since I followed their instructions to a T. :P
I'm happy to provide any other information that would help with troubleshooting.
And note: the book did say to use NAT rather than routed mode, so according to my study guide the VMs should be able to communicate whether they're on the same virtual network or on different ones.