How to keep the containers network device when I delete the hosts
So I have two identical machines in the cloud. On one of them the container gets its IP fine, without anything extra showing up on the host. The other machine just will not behave the same way.
On the problematic machine, every restart of the container automatically creates a veth on the host, whereas the working machine doesn't. I have literally copied and pasted their profiles so they are the same except for IPs, and yet they behave differently.
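For reference, a quick way to compare the two hosts is to list veth devices before and after a container restart; a stray one will show up as an extra line. This is just a diagnostic sketch using plain iproute2:

```shell
# List all veth devices currently present on this host, one per line.
# Run it before and after restarting the container and diff the output:
# a new line appearing after the restart is the stray host-side veth.
ip -o link show type veth
```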
Here is the working machines host and container.
Code:
root@routin:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
27: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:26:3e:ec:b4:9c brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.1.1.153/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::216:3eff:feef:b49c/64 scope link
valid_lft forever preferred_lft forever
root@routin:~# ip r
default via 10.1.1.1 dev eth0 proto static
10.1.1.0/24 dev eth0 proto kernel scope link src 10.1.1.153
root@routin:~# exit
paul@ubuntu:~$
paul@ubuntu:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 02:00:25:f2:00:5b brd ff:ff:ff:ff:ff:ff
altname enp0s3
altname ens3
inet 10.1.1.211/24 brd 10.1.1.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::15ff:fef2:5e/64 scope link
valid_lft forever preferred_lft forever
paul@ubuntu:~$ ip r
default via 10.1.1.1 dev eth0 proto static
10.1.1.0/24 dev eth0 proto kernel scope link src 10.1.1.211
paul@ubuntu:~$
And here is the broken machine:
Code:
root@jam:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
root@jam:~# ip r
root@jam:~# exit
exit
22/09/22 @ 16:27 @ ~
paul@demo ——› ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 02:00:1f:46:00:a9 brd ff:ff:ff:ff:ff:ff
altname enp0s3
altname ens3
inet 10.1.1.236/24 brd 10.1.1.255 scope global eth0
valid_lft forever preferred_lft forever
inet 10.1.1.252/24 brd 10.1.1.255 scope global secondary eth0
valid_lft forever preferred_lft forever
inet 10.1.1.192/24 brd 10.1.1.255 scope global secondary eth0
valid_lft forever preferred_lft forever
inet 10.1.1.247/24 brd 10.1.1.255 scope global secondary eth0
valid_lft forever preferred_lft forever
inet 10.1.1.130/24 brd 10.1.1.255 scope global secondary eth0
valid_lft forever preferred_lft forever
inet 10.1.1.52/24 brd 10.1.1.255 scope global secondary eth0
valid_lft forever preferred_lft forever
inet6 fe30::6bff:fe2a:39/64 scope link
valid_lft forever preferred_lft forever
22/09/22 @ 16:27 @ ~
paul@demo ——› ip r
default via 10.1.1.1 dev eth0 proto static
10.1.1.0/24 dev eth0 proto kernel scope link src 10.1.1.236
22/09/22 @ 16:27 @ ~
paul@demo ——›
How come I can delete an interface off the host on one machine and its internal interface stays, yet on the other it gets deleted?
"Glad you found it, Paul ..." Now, for the benefit of "the next poor schleb," can you please provide details ... "start to finish." What exactly is your configuration, how did you find the problem, and what exactly was wrong.
In this way, "the next poor schleb" might find a complete solution in just one thread, instead of an account of someone who had "found it" but didn't say exactly what "it" was.
(Begin at the beginning: "Identical machines in the cloud ..." Well, there are lots of ways to do that. And, so on.)
Last edited by sundialsvcs; 09-26-2022 at 10:11 PM.
Good point.
So with Linux Containers it's kind of confusing when you first start, as there are several places networking can be defined: configs, profiles, networks, and projects. These can each carry their own network settings; some override others and some are unique. In the profile I was using a 'nic' device with the nictype 'routed'.
But in the container's config I was using macvlan. To see this I typed:
Code:
sudo lxc config device show jam
To which it then showed.
Code:
eth0:
  nictype: macvlan
  parent: eth0
  type: nic
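In my case the fix was getting rid of that container-level device so only one definition applied. One way to do that (a sketch, assuming the container is named jam as above, and that you want the profile's definition to win) is:

```shell
# Drop the container-local eth0 device so it no longer overrides
# the profile's routed NIC definition ("jam" is the container name).
lxc config device remove jam eth0

# Restart so the container comes back up with the profile's NIC.
lxc restart jam
```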
Yet in the profile attached to this container you can see it using a routed nictype, not macvlan.
You can see this by typing:
Code:
sudo lxc profile show %PROFILENAMEHERE%
And the result would look like:
Code:
devices:
  eth0:
    ipv4.address: 10.1.1.24
    nictype: routed
    parent: eth0
    type: nic
So it's kind of an odd learning curve: you can apply networking in several different places with LXC/LXD, and a device defined on the container itself overrides the same device from the profile, so be careful.
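One habit that would have caught this sooner: LXD can show the merged, effective configuration (profiles plus container-level overrides) in a single view, which makes a conflicting device definition obvious. Assuming the same container name jam:

```shell
# Show the effective config after all profiles and local overrides
# are merged; a device set on the container shadows the profile's.
lxc config show jam --expanded
```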