KVM host is on the left (.250). VM is on the right (.240). The VM always has a nice ping, but the host shows very regular packet loss. They share the same NIC. I've never seen this before, but I don't have much experience with KVM either. Is there a tweak somewhere that prioritizes the VM?
Is it a pattern of 8 failures, 1 success, or is it more random than that? Just thinking there might be some form of packet limiting or firewalling at play.
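Something along these lines would show whether the drops follow a fixed pattern (192.168.1.250 is just a guess at your host's address based on the ".250" in your post; adjust for your subnet):

Code:
$ ping -i 0.2 -c 50 192.168.1.250 | grep icmp_seq
# Each reply prints "icmp_seq=N"; the sequence numbers that never show
# up are the lost packets. A repeating gap (e.g. 8 lost, 1 answered)
# points at rate limiting or one side being starved, whereas random
# gaps look more like congestion or a flaky link.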
Also, the full output of "ip addr show" from the host might help, if there is nothing sensitive in it. E.g.:
Code:
$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp6s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
link/ether e0:cb:4e:26:a5:ce brd ff:ff:ff:ff:ff:ff
3: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP qlen 1000
link/ether e0:cb:4e:26:a5:cd brd ff:ff:ff:ff:ff:ff
4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether e0:cb:4e:26:a5:cd brd ff:ff:ff:ff:ff:ff
inet 192.168.1.10/24 brd 192.168.1.255 scope global br0
valid_lft forever preferred_lft forever
inet6 fe80::e2cb:4eff:fe26:a5cd/64 scope link
valid_lft forever preferred_lft forever
5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 52:54:00:03:f9:89 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
6: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 500
link/ether 52:54:00:03:f9:89 brd ff:ff:ff:ff:ff:ff
7: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UNKNOWN qlen 500
link/ether fe:54:00:29:64:37 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fe29:6437/64 scope link
valid_lft forever preferred_lft forever
9: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UNKNOWN qlen 500
link/ether fe:54:00:cc:d6:bb brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fecc:d6bb/64 scope link
valid_lft forever preferred_lft forever
10: vnet3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UNKNOWN qlen 500
link/ether fe:54:00:71:30:b1 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fe71:30b1/64 scope link
valid_lft forever preferred_lft forever
11: vnet4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UNKNOWN qlen 500
link/ether fe:54:00:45:b3:40 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fe45:b340/64 scope link
valid_lft forever preferred_lft forever
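And a quick way to see exactly what is attached to the bridge (so you can tell whether the host and the VM taps really share the same physical NIC) is something like:

Code:
$ ip -br link show master br0
# Brief listing of everything enslaved to br0: the physical NIC
# (enp4s0 in the output above) plus the vnetN tap devices that
# libvirt created for the running VMs.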
It went from a pattern of 8 failures to a steady 6 after a reboot. It turned out to be my own fault, however. I was using macvtap passthrough because VT-d PCI passthrough wasn't available. I THOUGHT I had put the host on eth0 and the VM on eth1, but in fact eth0 was living its own life and both the host and the VM were sharing eth1. /blame wicd.
Having set up the interfaces properly the way I normally do (I had tried wicd since it seemed practical, but it crapped out), the host now runs on eth0 and the VM on eth1. Everything looks good now.
I believe the pattern of packet loss came from macvtap passing as many packets as possible to the VM, so the host was on its knees while they were both on the same interface.
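In case anyone hits the same thing: with macvtap you can confirm which physical NIC each tap actually hangs off (eth0/eth1 here are just the names from my setup):

Code:
$ ip -d link show type macvtap
# Each macvtap device is listed as e.g. "macvtap0@eth1"; the name after
# the "@" is the physical interface the VM traffic rides on. If it is
# the same interface the host itself uses, the two end up competing.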