Linux - Containers
This forum is for the discussion of all topics relating to Linux containers. Docker, LXC, LXD, runC, containerd, CoreOS, Kubernetes, Mesos, rkt, and all other Linux container platforms are welcome.
So it's been years of me not quite understanding containers. Even the fallback of trial & error hasn't helped. It's obviously a conceptual gap. This thought is prompted by how documentation for the various container technologies often goes off on really odd tangents, like the time I was reading up on either LXC or LXD: it started out well, but then it went into bizarro land going on about bridges to get networking. Umm, dude: when I install a distro inside qemu there is none of this bridge stuff. What gives?
Finally, after some time, I came up with a single question that may settle things, in my mind anyway, with regard to containers. I've even managed to condense it into a simple true/false question. Ready?
True or false? Containers can be used by my non-root user to seamlessly run programs from other distros.
Obviously it depends on your environment and your programs, but in general, yes: containers can be used by non-root users to run programs from other distros.
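To make that concrete, here is a minimal sketch of the kernel mechanism behind rootless containers; tools like rootless Podman or LXC's unprivileged containers do this (plus much more) for you. It assumes your kernel allows unprivileged user namespaces and that you have unpacked some other distro's root filesystem that you own at ./alpine-rootfs (the path and the Alpine choice are just examples for illustration).
Code:
/* other_distro.c - run a program from another distro's root filesystem
 * as an ordinary user, using only unprivileged user + mount namespaces.
 * Assumes a rootfs you own has been unpacked at ./alpine-rootfs
 * (e.g. from an Alpine "minirootfs" tarball); that path is only an
 * example. Build: gcc -o other_distro other_distro.c
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static void write_file(const char *path, const char *text)
{
    int fd = open(path, O_WRONLY);
    if (fd < 0 || write(fd, text, strlen(text)) < 0) {
        perror(path);
        exit(1);
    }
    close(fd);
}

int main(void)
{
    unsigned uid = getuid(), gid = getgid();
    char buf[64];

    /* New user + mount namespaces: no root required. */
    if (unshare(CLONE_NEWUSER | CLONE_NEWNS) != 0) {
        perror("unshare");
        return 1;
    }

    /* Map "root" inside the namespaces to our real uid/gid outside. */
    write_file("/proc/self/setgroups", "deny");
    snprintf(buf, sizeof buf, "0 %u 1", uid);
    write_file("/proc/self/uid_map", buf);
    snprintf(buf, sizeof buf, "0 %u 1", gid);
    write_file("/proc/self/gid_map", buf);

    /* Enter the other distro's filesystem and run one of its programs. */
    if (chroot("./alpine-rootfs") != 0 || chdir("/") != 0) {
        perror("chroot");
        return 1;
    }
    execl("/bin/busybox", "busybox", "cat", "/etc/os-release", (char *)NULL);
    perror("execl");
    return 1;
}
Run as your ordinary user, it prints Alpine's /etc/os-release even though the host is a different distro; real container runtimes add PID/network namespaces, cgroups, and image management on top of this same trick.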
The concept of a bridge is from networking. The point of containers is separation (in particular, process separation, but not limited to ONLY that).
If you virtualize the devices or isolate the container from physical devices, you then need some kind of filter or "bridge" to make use of those physical resources. Bridged networking allows an isolated container or virtual guest to use a bridged network device and appear on your network as an independent node. This is only one of many uses for bridged networking, but it is the one that most applies to containers. (QEMU, by the way, sidesteps all of this by defaulting to user-mode NAT networking, which is why you never had to set up a bridge there; LXC/LXD documentation tends to walk you through the bridged setup instead.)
The more you understand about Linux, the inner workings of operating systems and resource management, networking, and administration, the better you will understand both containers and physical servers.
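As for what a "bridge" actually is: it is just a kernel network device that behaves like a virtual Ethernet switch. LXC/LXD create one (lxcbr0, lxdbr0, ...) and plug each container's veth interface into it. Below is a hedged sketch of that first step only; the bridge name lqbr0 is made up, and this needs root / CAP_NET_ADMIN, which is exactly why QEMU's default user-mode NAT never made you do it.
Code:
/* make_bridge.c - create a Linux bridge device, the same kernel object
 * that "ip link add ... type bridge" or brctl creates and that LXC/LXD
 * attach container veth interfaces to (lxcbr0, lxdbr0, ...).
 * The name "lqbr0" is made up. Needs root / CAP_NET_ADMIN.
 * Build: gcc -o make_bridge make_bridge.c
 */
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/sockios.h>
#include <unistd.h>

int main(void)
{
    /* Any socket works as a handle for the bridge ioctls. */
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* Create the bridge; container tooling would then move one end of a
     * veth pair into the container and enslave the host end to it.    */
    if (ioctl(fd, SIOCBRADDBR, "lqbr0") < 0) {
        perror("SIOCBRADDBR");
        close(fd);
        return 1;
    }

    puts("created bridge lqbr0 (inspect with: ip link show lqbr0)");
    close(fd);
    return 0;
}
Afterwards, ip link show lqbr0 shows the new device, and ip link del lqbr0 removes it again.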
I know *nothing* about this, but... (True I guess, fwiw)
I recently started reading a (library) intro book on Docker.
There are diagrams that contrast it with a VM,
where it instead uses the host OS
BUT each has its own IP / hostname!
(& disk: huh?)
And typically 1 app = 1 program
Btw, it's related to cgroups & namespaces in the kernel (a tiny demo of the namespace part is sketched after this post).
I wonder if snaps/Flatpaks work on Slack (probably not, but maybe LFS; snapd wants me to reboot my MX with systemd).
Maybe the solution is: static binaries: static-tcpdump works on *MY* Slackware, which has (basically) ONE binary (& kernel): busybox! All the CLI I need to study Linux (but no web browser = GUI, unfortunately)
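The namespace point above can be shown in a few lines of C: a container "has its own hostname" because it has its own UTS namespace, and an unprivileged user namespace is what lets a normal user create one. This is only a sketch of the kernel facility, and it assumes unprivileged user namespaces are enabled (most mainstream distros allow them).
Code:
/* my_own_hostname.c - the "each container has its own hostname" bit,
 * boiled down to the two namespaces involved (user + UTS). Runs as an
 * ordinary user on kernels that allow unprivileged user namespaces;
 * the hostname change is visible only to this process.
 * Build: gcc -o my_own_hostname my_own_hostname.c
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *inside = "inside-the-container";
    char name[64];

    gethostname(name, sizeof name);
    printf("host hostname       : %s\n", name);

    /* A new user namespace grants the capability needed to create a
     * UTS namespace and call sethostname() without being root.      */
    if (unshare(CLONE_NEWUSER | CLONE_NEWUTS) != 0) {
        perror("unshare");
        return 1;
    }

    /* Only the new UTS namespace sees this change. */
    if (sethostname(inside, strlen(inside)) != 0) {
        perror("sethostname");
        return 1;
    }

    gethostname(name, sizeof name);
    printf("inside the namespace: %s\n", name);
    return 0;
}
cgroups are the other half of the story: they limit how much CPU, memory, and I/O the processes inside may consume, but they don't change what those processes see.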
For a definitive answer we would have to have a very specific container in mind: there are many kinds and they are not all the same.
LXC containers were used to isolate processes for security some time before they were adequate to run a full isolated operating system based upon the host kernel. OpenVZ containers were OS containers from the beginning, and not useful for isolating a single process. Container-world is HUGE, and this discussion has barely touched upon the view through a single keyhole.
So the answer is "true", for some selection of container technology.
My usual explanation is: "a containerized process is wearing rose-colored glasses."
The process is actually running directly on the host operating system, but it has no idea what that physical environment actually looks like, because of the glasses. It sees what it wants and needs to see, but it has no idea how those things are actually provided. (Nor does it care.) It perceives the existence of "a file system" and "a network" and perhaps other resources, and, while those resources are "real enough" that the process is able to use them, they're all an illusion. Likewise, the process might perceive that it is "running as 'root'" and "able to do 'rootly' things," when according to the host operating system it actually is not. (And, once again, it doesn't [have to ...] care.) Generally speaking, containerized processes perceive their user-id/group-id to be whatever they need it to be.
This results in "VM-like isolation" without the overhead of actual virtualization. It also greatly simplifies the implementation of "cloud computing" timesharing: the cloud manager has complete latitude in managing the physical environment because the containers never directly see it. As long as the illusion is seamlessly maintained, the clients won't break.
A key difference between containers and VMs is that the operating system kernel is the same ... because containerized processes do run directly on the host. ("Docker for Windows" actually hosts its Linux containers inside a lightweight virtual machine for exactly this reason ...)
The Linux kernel implements various facilities which, taken together, create the perfect illusion. Containerization software then "wraps it all up with a pretty, manageable bow." Docker, in particular, takes this ease-of-management idea rather to an extreme ... allowing you to, as it were, "grab complete subsystems right off the shelf of a free marketplace." Plug 'em in and turn 'em on without having to look inside the boxes. Someone else did the dirty work for you.
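Here is a tiny, hedged illustration of those kernel facilities and the "rose-colored glasses": with nothing but user and PID namespaces (no Docker, no root), a process can be made to believe it is PID 1 in an otherwise empty world, while the host still sees a perfectly ordinary process. It assumes unprivileged user namespaces are enabled; real runtimes add mount, network, UTS, and IPC namespaces plus cgroups to complete the illusion.
Code:
/* pid1_illusion.c - the "rose-colored glasses": inside new user + PID
 * namespaces a process sees itself as PID 1 in an empty world, while
 * the host still sees an ordinary process with your normal uid.
 * Build: gcc -o pid1_illusion pid1_illusion.c   (run as a normal user)
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    printf("outside: pid=%d uid=%d\n", (int)getpid(), (int)getuid());

    /* CLONE_NEWPID only affects children created after this call,
     * so we fork once to actually step into the new PID namespace. */
    if (unshare(CLONE_NEWUSER | CLONE_NEWPID) != 0) {
        perror("unshare");
        return 1;
    }

    pid_t child = fork();
    if (child < 0) {
        perror("fork");
        return 1;
    }
    if (child == 0) {
        /* Glasses on: we appear to be init, although `ps` on the host
         * still shows the real PID and the real user.                */
        printf("inside : pid=%d\n", (int)getpid());
        return 0;
    }
    waitpid(child, NULL, 0);
    return 0;
}
That is all "the glasses" are: the kernel answering the process's questions from inside its namespaces instead of from the host's point of view.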
Originally, Docker used LXC as its execution driver, but it has since moved on to its own runtime stack (libcontainer, now runc and containerd).