[SOLVED] general question: container isolation-level
Linux - Containers: This forum is for the discussion of all topics relating to Linux containers. Docker, LXC, LXD, runC, containerd, CoreOS, Kubernetes, Mesos, rkt, and all other Linux container platforms are welcome.
Processes which are running in a containerized environment need to have their own personal perspective of what "user-ids" are. Containers handle this by mapping the user-ids that are perceived by the container to the actual user-ids of the host. Obviously, processes from time to time need to switch to a uid=0 context, and to then appear to(!) exercise super-powers within their container.
The question is: "does the host see 'root?'" Do the powers of a "super-user" within a container therefore extend to the host?
If the container is "non-privileged," as it certainly always should be, a process in a container can believe that it is "root" ... when, to the Linux host, it actually isn't. Linux will dutifully report to the process that it has uid=0. Linux will – up to a point – maintain the illusion of "rootliness." But its actual user-id, as seen by the host, might be 123456. (This arbitrary but non-zero value is unknown to it.) In reality, the process cannot exercise root privileges upon the host. But it is king of its little world.
If the container is "privileged," then its user-id really is zero, even on the host.
For obvious reasons, I counsel that a container should n-e-v-e-r be privileged. If you need to do something on the host which actually requires root privileges on the host, do it outside of the container. You can see the mapping that the container cannot see, so you can arrange things to look right when viewed from the container's perspective.
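The mapping machinery described above can be inspected through the kernel's user-namespace files; a minimal sketch (the /proc and /etc paths are standard Linux/LXD locations, but the actual delegated ranges vary per system):

```shell
# Every process has a uid mapping for its user namespace; the columns are:
# uid inside the namespace, uid in the parent namespace, range length.
# On the host this is the identity mapping; inside an unprivileged
# container the second column shows the shifted host-side base.
cat /proc/self/uid_map

# On an LXD host, the uid/gid ranges delegated for containers live here
# (the file may be absent on non-LXD systems):
cat /etc/subuid 2>/dev/null || echo "(no /etc/subuid on this system)"
```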
Last edited by sundialsvcs; 02-22-2018 at 03:34 PM.
Quote:
For obvious reasons, I counsel that a container should n-e-v-e-r be privileged. If you need to do something on the host which actually requires root privileges on the host, do it outside of the container. You can see the mapping that the container cannot see, so you can arrange things to look right when viewed from the container's perspective.
To check whether a LXD container is privileged, run
Quote:
lxc config get guiapps security.privileged
By default, LXD containers are not privileged. You have to set the flag security.privileged in order for them to be such.
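The check above pairs with the corresponding `set` and `unset` commands; a sketch (the `guiapps` name is the container from this thread, and the commands need a running LXD daemon):

```shell
# Empty output from `get` means the key is unset, i.e. the container is
# unprivileged (the LXD default).
lxc config get guiapps security.privileged

# Setting it (NOT recommended, per the discussion above):
# lxc config set guiapps security.privileged true
# And reverting to the default:
# lxc config unset guiapps security.privileged
```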
In the meantime I got teamviewer running within the container, thanks to simos' great tutorial.
As far as the user-id of processes is concerned, I have another question though:
Running "ps -ef" in the host gives me, amongst other information, the following results:
So I see "rosika" as user-id for "teamviewer" (which is running in the container). Is this right?
Quote:
If the container is "privileged," then its user-id really is zero, even on the host.
O.K., at least the user-id isn't zero. That's good. But is it alright that it is "rosika"? I mean, this way I can kill the process from the host (not only from within the container).
Or is it just due to the fact that it's a GUI-based application (Mapping the user ID ......)?
The user-ids 231xxx are most likely those of the container occupants.
Containers use a mapping-table to map the user-ids seen by the container (including uid=0) into the actual ones known to the host. Container occupants do not know what the mapping is. And you'd need to look in the same place to see if any of them are mapped to rosika.
Also – Wine is a bit of a special case within a containerized environment because it has to have access to the host's X-server. "Google it" for more details.
Yes, because the container processes are, in fact, "Linux processes," the host can kill them. But I suggest that you do it within the appropriate container environment. You don't want to do something that might break the illusion . . .
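To see those processes from the host side, one can filter `ps` output by the mapped uid range; a sketch (231072 is the common LXD default base, matching the "231xxx" uids mentioned above, but that base and the range size are assumptions — check /etc/subuid for the real values):

```shell
# List pid, real uid, and command for every host process whose uid falls
# inside the container's mapped range (base and size are assumptions;
# adjust to what /etc/subuid shows on your host).
ps -eo pid,uid,comm --no-headers | awk '$2 >= 231072 && $2 < 231072 + 65536'
```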
Last edited by sundialsvcs; 02-23-2018 at 07:26 AM.
tnx a lot.
Info: I forgot to mention: "lxc config get guiapps security.privileged" gave me no output whatsoever. Thus I assume everything is alright and I'm running an unprivileged container.
Quote:
Yes, because the container processes are, in fact, "Linux processes," the host can kill them. But I suggest that you do it within the appropriate container environment. You don't want to do something that might break the illusion . . .
Well, I did it just once in order to make a correct statement here. Hope it didn't hurt too much. But thanks for your suggestion.
Yet there's still one thing that puzzles me:
When logging into my container by using the command lxc console guiapps I cannot make use of top.
I get the normal display of processes but it more or less stops working immediately. Furthermore it says:
Quote:
Unknown command - try 'h' for help
I have no idea why that is.
It's a bit of a puzzler given the fact that when running the container with lxc exec guiapps -- sudo --login --user ubuntu, top works as it should! With no weird behaviour whatsoever.
And it's similar with htop. This one at least seems to work, but when closing it with CTRL+c the terminal shows
Code:
[?64;1;2;6;9;15;18;21;22c
That's also the case with top.
Do you have any idea why (h)top is behaving in such a way and what I could do about it?
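For what it's worth, the stray `[?64;1;2;6;9;15;18;21;22c` looks like a terminal device-attributes reply that got echoed to the shell instead of being consumed by the emulator, which would point at a terminal-type mismatch on the console session. A sketch of how one might check and work around this (forcing `TERM=xterm` is an assumption about the host-side emulator, not something verified against this exact setup):

```shell
# Inside the container: what terminal type does the console session report?
echo "TERM is: ${TERM:-unset}"

# A common workaround when full-screen programs misbehave on a console is
# to force a widely supported terminal type for the session (assumption:
# the terminal emulator on the host side is xterm-compatible):
TERM=xterm top -b -n 1 | head -n 5
```

If that helps, exporting TERM in the container user's shell profile makes it stick for future console logins.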
The instructions at https://blog.simos.info/how-to-run-g...buntu-desktop/
have a step that does userid mapping for your non-root user from the host to the container.
It is the part with title Mapping the user ID of the host to the container (PREREQUISITE).
In other words, the processes of the container's non-root user account can be affected by the host's non-root user account. For example, your host's non-root user account can kill a GUI process running in the container.
However, as far as I understand, the process that launched in the container does not have access to the host's filesystem.
The container is not privileged, but there is a hole to enable running GUI apps with graphics acceleration.
The alternative would be to use a separate graphics-accelerated X server like Xephyr, and send the container's output there.
I have not tried this.
Regarding top and htop, they both work for me on the guiapps container and also a vanilla container.
I tried with both gnome-terminal and xterm (of course running these terminal emulators on the host).
Quote:
It is the part with title Mapping the user ID of the host to the container (PREREQUISITE).
Yes, I followed your instructions precisely. I just wanted to make sure that everything's running o.k.
And it really seems to do so.
As far as (h)top is concerned, you inspired me to test xterm.
I installed it in the guiapps container, and upon starting it a new window (xterm) opened. I think that's due to the fact that we got graphics-accelerated GUI apps running in the container.
And now it worked. Top and htop are both running without complaint, even when starting the container with "lxc console guiapps"!
No idea why I couldn't get it working any other way. Yet I'm very pleased with that workaround.
Thanks a lot for your help.
And a big thank you to all the other helpers too.
Quote:
In other words, the processes of the container's non-root user account can be affected by the host's non-root user account. For example, your host's non-root user account can kill a GUI process running in the container.
However, as far as I understand, the process that launched in the container does not have access to the host's filesystem.
The container is not privileged, but there is a hole to enable running GUI apps with graphics acceleration.
The user-ids seen by the container occupant are specific to the container's environment and the container occupant thinks that he has control of them. What he does not know is that they are being mapped to host-side user-ids. (Ditto group-ids.) You determine the mapping when you set up the container. Yes, other processes which are not running with the container's rose-colored-glasses on can also see those processes ... as they actually are.
You also determine – chroot-jail style – what filesystem topology it is able to see. You can also set strict limits on what resources it may consume.
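The resource limits mentioned above correspond to real LXD config keys; an illustrative sketch (the values are arbitrary examples, and the commands need a running LXD daemon):

```shell
# Cap the container at 2 CPUs and 512 MB of RAM
# (limits.cpu and limits.memory are real LXD keys; values are examples):
lxc config set guiapps limits.cpu 2
lxc config set guiapps limits.memory 512MB

# Inspect what is currently set:
lxc config show guiapps
```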
When a "container occupant" process is dispatched by the host, the entire set of kernel-settings that implements these various illusions is put in place for that process every time it runs. Other processes running on the same machine might not have these illusions put in front of their eyes. (Or, they may have a different set of illusions.) But, conceptually, "a container" is: "a cleverly-crafted and efficient illusion." The occupants believe that they know what the real world looks like, but they are totally wrong. (And, they don't care. It's good enough for them.)
Last edited by sundialsvcs; 02-23-2018 at 11:49 AM.
tnx for the detailed explanation.
For a beginner in containerization, not everything is easy to understand. Yet the more I read (especially the good explanations and tutorials provided by you and simosx), the more I get it. It's a really fascinating field.
Perhaps I'm a bit paranoid, but I think nowadays it's certainly more important than it used to be to implement as many security mechanisms as possible.
And it's not just teamviewer I wanted to get running, but that was my main objective in the beginning.
(That's due to the fact that I couldn't get it running within firejail.)
But running it in the lxd-container is successful. Really fascinating!
So now that everything works fine, there's just one thing which puzzles me a bit.
When logging in with "lxc console guiapps" I get the following message:
Code:
run-parts: /etc/update-motd.d/98-fsck-at-reboot exited with return code 1
I'm not sure what to make of it and what it really means.
Everything is working fine though. So it doesnīt seem to have any consequences. Yet Iīm curious about it.
motd is: message of the day (see man motd). You may check what is in that file. The message probably means: when you started the container (which is similar to a boot) fsck was executed, but failed for some reason. You may need to check logs related to it.
so most probably the execution of /usr/lib/update-notifier/update-motd-fsck-at-reboot has failed.
there is a dir named fsck; probably you can find something in it (or in kern.log, syslog). Check the date/time of execution in the logs, that may help to find relevant lines.
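As a starting point for the log hunt suggested above, a sketch (the log paths are the Ubuntu defaults; adjust as needed):

```shell
# Search the usual logs for fsck-related lines, newest last; -i ignores
# case and -n prints line numbers for cross-referencing timestamps.
grep -in fsck /var/log/syslog /var/log/kern.log 2>/dev/null | tail -n 20
```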
I really do appreciate your help. Tnx.
Yet I'm sorry to say that I couldn't find anything helpful.
The fsck directory has just 2 entries in it:
Code:
ubuntu@guiapps /v/l/fsck> ls -l
total 0
-rw-r----- 1 root adm 0 Jan 26 04:44 checkfs
-rw-r----- 1 root adm 0 Jan 26 04:44 checkroot
Both files are empty.
Neither kern.log nor syslog mention anything remotely connected to fsck.
I'm a bit at a loss here.
But never mind. At least I know the following (thanks to your info):
Quote:
most probably the execution of /usr/lib/update-notifier/update-motd-fsck-at-reboot has failed
BTW: I found out that I'm not alone in experiencing such behaviour.
On https://superuser.com/questions/8802...-return-code-1 there's a user with that problem as well. But I think that one may have different causes....
When you log in into Ubuntu, you get a motd with fresh information relating to your system.
There are scripts that are running in
Code:
ubuntu@guiapps:~$ ls -l /etc/update-motd.d/
total 8
-rwxr-xr-x 1 root root 1220 Oct 22 2015 00-header
-rwxr-xr-x 1 root root 1157 Jun 14 2016 10-help-text
-rwxr-xr-x 1 root root 334 Sep 19 19:17 51-cloudguest
-rwxr-xr-x 1 root root 97 May 24 2016 90-updates-available
-rwxr-xr-x 1 root root 299 Jul 22 2016 91-release-upgrade
-rwxr-xr-x 1 root root 111 May 11 2017 97-overlayroot
-rwxr-xr-x 1 root root 142 May 24 2016 98-fsck-at-reboot
-rwxr-xr-x 1 root root 144 May 24 2016 98-reboot-required
ubuntu@guiapps:~$
For example, 90-updates-available will check whether updates are available, and report back what security updates and other updates are there.
The script 98-fsck-at-reboot reports back to motd whether a fsck is scheduled, or required.
It does not make sense for a container to deal with fsck issues because these are handled by the host.
Therefore, the issue here is that the script should return 0 (no error) and effectively be silent.
That is, we have found a bug and need to report it somewhere.
So, here is the project page, https://launchpad.net/ubuntu/+source/update-notifier
There is a tab there called Bugs where we could report this.
Something like "When logging into a container through the console, we get that error".
Ideally, we should figure out where in /usr/lib/update-notifier/update-motd-fsck-at-reboot we get the error. But how?
We can edit the first line of /usr/lib/update-notifier/update-motd-fsck-at-reboot and change it to
Code:
#!/bin/bash -x
The -x says to show tracing information.
Let's log in again through the console.
Code:
$ lxc console guiapps
To detach from the console, press: <ctrl>+a q
Ubuntu 16.04.3 LTS guiapps console
guiapps login: ubuntu
Password:
Last login: Sun Feb 25 22:01:12 UTC 2018 on console
+ set -e
+ '[' '' = --force ']'
+ stamp=/var/lib/update-notifier/fsck-at-reboot
+ '[' -e /var/lib/update-notifier/fsck-at-reboot ']'
++ stat -c %Y /var/lib/update-notifier/fsck-at-reboot
+ stampt=1519592650
+++ awk '{print $1}' /proc/uptime
++ date -d 'now - 16759.00 seconds' +%s
+ last_boot=1519579337
++ date +%s
+ now=1519596096
+ '[' 1519596250 -lt 1519596096 ']'
+ '[' 1519592650 -gt 1519596096 ']'
+ '[' 1519592650 -lt 1519579337 ']'
+ '[' -n '' ']'
+ cat /var/lib/update-notifier/fsck-at-reboot
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.13.0-36-generic x86_64)
...
Hmm, we do not get the error now. What's wrong? Perhaps we need to restart the container first?
Code:
$ lxc stop guiapps
$ lxc start guiapps
$ lxc console guiapps
To detach from the console, press: <ctrl>+a q
Ubuntu 16.04.3 LTS guiapps console
guiapps login: ubuntu
Password:
Last login: Sun Feb 25 22:01:36 UTC 2018 on console
+ set -e
+ '[' '' = --force ']'
+ stamp=/var/lib/update-notifier/fsck-at-reboot
+ '[' -e /var/lib/update-notifier/fsck-at-reboot ']'
++ stat -c %Y /var/lib/update-notifier/fsck-at-reboot
+ stampt=1519592650
+++ awk '{print $1}' /proc/uptime
++ date -d 'now - 61.00 seconds' +%s
+ last_boot=1519596455
++ date +%s
+ now=1519596516
+ '[' 1519596250 -lt 1519596516 ']'
+ NEEDS_FSCK_CHECK=yes
+ '[' -n yes ']'
+ check_occur_any=
++ awk '$5 ~ /^ext(2|3|4)$/ { print $1 }'
++ mount
+ ext_partitions='/dev/sda5
/dev/sda5
/dev/sda1
/dev/sda1
/dev/sda1
/dev/sda1
/dev/sda1'
+ for part in '$ext_partitions'
++ dumpe2fs -h /dev/sda5
+ dumpe2fs_out='Couldn'\''t find valid filesystem superblock.'
run-parts: /etc/update-motd.d/98-fsck-at-reboot exited with return code 1
ubuntu@guiapps:~$
The command that fails is dumpe2fs -h /dev/sda5. That's the root device of the host...
I have not dug deeper than this. I suppose that it relates to the changes we made to guiapps in order to run GUI apps. guiapps can see the device name but cannot access it, hence the error.
I tried with a normal container and I do not get the issue.
Therefore, if it is an issue with only guiapps and similar such containers, then I can say that the message can be safely ignored.
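The trace above also makes sense of the original one-line error: the motd script runs under `set -e`, so the first command that exits non-zero aborts the whole script with that status, which run-parts then reports. A minimal sketch of that mechanism, with `false` standing in for the failing `dumpe2fs -h /dev/sda5`:

```shell
# Under `set -e`, an assignment whose command substitution fails aborts
# the script immediately; the echo below is never reached.
bash -c 'set -e; out=$(false); echo "never reached"'
echo "script exit code: $?"   # prints: script exit code: 1
```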
thank you so much for your very detailed explanation. You put a lot of work into it. Very much appreciated.
I followed it step by step.
Quote:
[...] guiapps can see the device name but cannot access it, hence the error. [...] Therefore, if it is an issue with only guiapps and similar such containers, then I can say that the message can be safely ignored.
O.K. That's the explanation then.
So to sum up: nothing to be really worried about. Fine!
Quote:
I tried with a normal container and I do not get the issue.
Yeah, it's the same with me. When I log into the container with "lxc exec guiapps -- sudo --login --user ubuntu" I don't get this error message either. Just when using "lxc console guiapps".
Glad this could be sorted out.
I probably forgot to mention that I've been testing these lxd-containers by running them within my virtual machine (bodhi linux, 32 bit).
Before installing them on my productive system (Lubuntu, 64 bit) I thought it might be a good idea to get those containers running in my VM first. If it's successful there and I have no problems handling them, then they'll probably work to my satisfaction on my host system as well.
At the beginning I was considering docker but soon learned that it requires a 64-bit host to run on.
Therefore I decided on lxd-containers. And now they really work fine.
So a big thank you to you, Simos, and all the other helpers.
I'm so glad about the fantastic help I got.