root is a special case with containers.
Processes running in a containerized environment need their own private perspective of what "user-ids" are. Containers handle this by mapping the user-ids perceived inside the container to the actual user-ids of the host. Processes, from time to time, need to switch to a uid=0 context and then appear(!) to exercise super-powers within their container. The question is: "does the host see 'root'?" Do the powers of a "super-user" within a container therefore extend to the host? If the container is "non-privileged," as it certainly always should be, a process in a container can believe that it is "root" ... when, to the Linux host, it actually isn't. Linux will dutifully report to the process that it has uid=0, and will – up to a point – maintain the illusion of "rootliness." But its actual user-id, as seen by the host, might be 123456. (This arbitrary but non-zero value is unknown to the process.) In reality, the process cannot exercise root privileges upon the host. But it is king of its little world. If the container is "privileged," then its user-id really is zero, even on the host. For obvious reasons, I counsel that a container should n-e-v-e-r be privileged. If you need to do something which actually requires root privileges on the host, do it outside of the container. You can see the mapping that the container cannot see, so you can arrange things to look right when viewed from the container's perspective.
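The mapping described above can actually be inspected from inside the container, because the kernel exposes the uid translation table of each process. A minimal sketch (the numbers in the comments are examples only; the actual ranges vary per host):

```shell
# Inside an unprivileged container, /proc/self/uid_map shows the
# translation table. A line such as "0 100000 65536" means: container
# uid 0 maps to host uid 100000, for a range of 65536 ids.
cat /proc/self/uid_map

# Extract the host-side uid that container "root" really runs as
# (field 2 of the line whose first field is 0):
awk '$1 == 0 { print $2 }' /proc/self/uid_map
```

On the host itself (outside any container) the map is the identity mapping `0 0 4294967295`, which is one quick way to tell whether you are inside a user namespace.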
Hi simosx and sundialsvcs,
thank you so much for the explanation. In the meantime I got teamviewer running within the container, thanks to simos' great tutorial. As far as the user-id of processes is concerned, I have another question though: running "ps -ef" on the host gives me, amongst other information, the following results: Code:
root 2795 1 1 13:07 ? 00:00:23 /usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log Quote:
Or is it just due to the fact that it's a GUI-based application (Mapping the user ID ......)? Greetings. Rosika
The user-ids 231xxx are most likely those of the container occupants.
Containers use a mapping table to map the user-ids seen by the container (including uid=0) into the actual ones known to the host. Container occupants do not know what the mapping is. And you'd need to look in that same mapping to see if any of them are mapped to rosika. Also – Wine is a bit of a special case within a containerized environment because it has to have access to the host's X-server. "Google it" for more details. Yes, because the container processes are, in fact, "Linux processes," the host can kill them. But I suggest that you do it within the appropriate container environment. You don't want to do something that might break the illusion . . .
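To see the host's side of that mapping table, one approach is to read the status of a container process from the host. A sketch, assuming PID 2795 from the ps output earlier in the thread (substitute a real PID from your own `ps -ef`):

```shell
# On the host: the real (host-side) uid a container process runs under
# is visible in its status file. The "Uid:" line lists the real,
# effective, saved and filesystem uids.
awk '/^Uid:/ { print "host-side uid:", $2 }' /proc/2795/status

# The per-user id ranges handed out to unprivileged containers are
# recorded on the host in these files:
cat /etc/subuid /etc/subgid
```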
Hi sundialsvcs,
tnx a lot. Info: I forgot to mention: "lxc config get guiapps security.privileged" gave me no output whatsoever. Thus I assume everything is alright and I'm running an unprivileged container. ;) Quote:
Yet there's still one thing that puzzles me: when logging into my container by using the command lxc console guiapps I cannot make use of top. I get the normal display of processes but it more or less stops working immediately. Furthermore it says: Quote:
It's a bit of a puzzler given the fact that when running the container with lxc exec guiapps -- sudo --login --user ubuntu, top works as it should! With no weird behaviour whatsoever. And it's similar with htop. This one at least seems to work, but when closing it with CTRL+c the terminal shows Code:
[?64;1;2;6;9;15;18;21;22c Do you have any idea why (h)top is behaving in such a way and what I could do about it? Greetings. Rosika
Hi Rosika and sundialsvcs,
The instructions at https://blog.simos.info/how-to-run-g...buntu-desktop/ have a step that does userid mapping for your non-root user from the host to the container. It is the part with the title Mapping the user ID of the host to the container (PREREQUISITE). In other words, the processes of the container's non-root user account can be affected by the host's non-root user account. For example, your host's non-root user account can kill a GUI process running in the container. However, as far as I understand, a process launched in the container does not have access to the host's filesystem. The container is not privileged, but there is a hole to enable running GUI apps with graphics acceleration. The alternative would be to use a separate graphics-accelerated X server like Xephyr, and send the container's output there. I have not tried this. Regarding top and htop, they both work for me on the guiapps container and also on a vanilla container. I tried with both gnome-terminal and xterm (of course running these terminal emulators on the host).
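For readers who don't want to follow the link: if I recall the tutorial's PREREQUISITE step correctly, it amounts to something like the following (container name guiapps as used in this thread; uid/gid 1000 assumed to be your host desktop user — adjust to your own ids):

```shell
# Map host uid 1000 and gid 1000 onto the same ids inside the container,
# so the host user and the container's "ubuntu" user line up.
lxc config set guiapps raw.idmap "both 1000 1000"

# The idmap change takes effect on the next start of the container.
lxc restart guiapps
```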
Hi Simos,
tnx again. Quote:
And it really seems to do so. :cool: As far as (h)top is concerned, you inspired me to test xterm. I installed it in the guiapps container and upon starting it, it opened a new window (xterminal). I think that's due to the fact that we got graphics-accelerated GUI apps running in the container. And now it worked. Top and htop are both running without complaint, even when starting the container with "lxc console guiapps"! No idea why I couldn't get it working any other way. Yet I'm very pleased with that workaround. Thanks a lot for your help. And a big thank you to all the other helpers too. Greetings. Rosika :party:
Quote:
You also determine – chroot-jail style – what filesystem topology it is able to see. You can also set strict limits on what resources it may consume. When a "container occupant" process is dispatched by the host, the entire set of kernel settings that implements these various illusions is put in place for that process, every time it runs. Other processes running on the same machine might not have these illusions put in front of their eyes. (Or, they may have a different set of illusions.) But, conceptually, "a container" is: "a cleverly-crafted and efficient illusion." The occupants believe that they know what the real world looks like, but they are totally wrong. (And, they don't care. It's good enough for them.)
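Those "strict limits on what resources it may consume" are ordinary LXD config keys. A hedged sketch, again using the guiapps container from this thread (the limits.* keys are standard LXD configuration; the values are arbitrary examples):

```shell
# Cap the container at 2 CPUs and 512 MB of RAM:
lxc config set guiapps limits.cpu 2
lxc config set guiapps limits.memory 512MB

# Review the whole set of "illusions" (idmap, devices, limits) that the
# host will put in place for the container's processes:
lxc config show guiapps --expanded
```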
Hi sundialsvcs,
tnx for the detailed explanation. For a beginner in containerization, not everything is easy to understand. Yet the more I read (especially the good explanations and tutorials provided by you and simosx), the more I get it. It's a really fascinating field. Perhaps I'm a bit paranoid, but I think nowadays it's certainly more important than it used to be to implement as many security mechanisms as possible. ;) And it's not just teamviewer I wanted to get running, but that was my main objective in the beginning. (That's due to the fact that I couldn't get it running within firejail.) But running it in the lxd-container is successful. Really fascinating! :cool: So now that everything works fine, there's just one thing which puzzles me a bit. When logging in with "lxc console guiapps" I get the following message: Code:
run-parts: /etc/update-motd.d/98-fsck-at-reboot exited with return code 1 Everything is working fine though, so it doesn't seem to have any consequences. Yet I'm curious about it. Do you have any ideas? Greetings. Rosika
motd is: message of the day (see man motd). You may check what is in that file. The message probably means: when you started the container (which is similar to a boot), fsck was executed but failed for some reason. You may need to check the logs related to it.
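For context: the motd machinery is essentially run-parts over /etc/update-motd.d/, and that "exited with return code 1" line is printed whenever one of the scripts there finishes with a non-zero status. A portable sketch of the mechanism (a throwaway directory stands in for /etc/update-motd.d/):

```shell
# Simulate /etc/update-motd.d/: one well-behaved script, one that fails.
d=$(mktemp -d)
printf '#!/bin/sh\necho "Welcome"\n' > "$d/00-header"
printf '#!/bin/sh\nexit 1\n'         > "$d/98-fsck-at-reboot"
chmod +x "$d"/*

# What run-parts effectively does: execute each script in order and
# report any that exit with a non-zero status.
for f in "$d"/*; do
    "$f" || echo "run-parts: $f exited with return code $?"
done
rm -r "$d"
```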
Hi pan64,
tnx for your help. I found the file "98-fsck-at-reboot", but it just contains the script: Code:
#!/bin/sh Code:
-rw-r--r-- 1 root root 3184 Feb 25 14:26 alternatives.log But thanks anyway. Greetings. Rosika
So most probably the execution of /usr/lib/update-notifier/update-motd-fsck-at-reboot has failed.
There is a dir named fsck; you can probably find something in it (or in kern.log, syslog). Check the date/time of execution in the logs; that may help to find the relevant lines.
Hi again,
I really do appreciate your help. Tnx. Yet I'm sorry to say that I couldn't find anything helpful. The fsck directory has just 2 entries in it: Code:
Neither kern.log nor syslog mention anything remotely connected to fsck. I'm a bit at a loss here. But never mind. At least I know the following (thanks to your info): Quote:
On https://superuser.com/questions/8802...-return-code-1 there's a user with that problem as well. But I think that one may have different causes.... Greetings. Rosika
When you log in to Ubuntu, you get a motd with fresh information relating to your system.
There are scripts that are run from this directory: Code:
ubuntu@guiapps:~$ ls -l /etc/update-motd.d/ Code:
98-fsck-at-reboot It does not make sense for a container to deal with fsck issues, because these are handled by the host. Therefore, the issue here is that the script should return 0 (no error) and effectively be silent. That is, we have found a bug and need to report it somewhere. Where shall we report it? Quote:
There is a tab there called Bugs where we could report this. Something like "When logging into a container through the console, we get that error". Ideally, we should figure out where in /usr/lib/update-notifier/update-motd-fsck-at-reboot we get the error. But how? We can edit the first line of /usr/lib/update-notifier/update-motd-fsck-at-reboot and change it into Code:
#!/bin/bash -x Let's log in again through the console. Code:
$ lxc console guiapps Code:
$ lxc stop guiapps I have not dug deeper than this. I suppose that it relates to the changes we made to guiapps in order to run GUI apps. guiapps can see the device name but cannot access it, hence the error. I tried with a normal container and I did not get the issue. Therefore, if it is an issue with only guiapps and similar such containers, then I can say that the message can be safely ignored.
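The effect of that #!/bin/bash -x change can be demonstrated without touching the container: -x makes bash print each command (prefixed with "+") to stderr before running it, so the last "+" line before the failure shows exactly where a script dies. A self-contained sketch using a stand-in script (the real target would be /usr/lib/update-notifier/update-motd-fsck-at-reboot):

```shell
# A tiny stand-in script with the traced shebang:
cat > /tmp/demo-motd.sh <<'EOF'
#!/bin/bash -x
date > /dev/null
echo "checking disks"
EOF
chmod +x /tmp/demo-motd.sh

# Each executed command appears on stderr with a leading "+",
# followed by the script's normal stdout:
/tmp/demo-motd.sh
rm /tmp/demo-motd.sh
```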
Hi simosx,
thank you so much for your very detailed explanation. You put a lot of work into it. Very much appreciated. :) I followed it step by step. Quote:
So to sum up: nothing to be really worried about. Fine! Quote:
Glad this could be sorted out. I probably forgot to mention that I've been testing these lxd-containers by running them within my virtual machine (bodhi linux, 32-bit). Before installing them on my productive system (Lubuntu, 64-bit), I thought it might be a good idea to get those containers running in my VM. If it's successful there and I have no problems handling them, then they'll probably work to my satisfaction on my host system as well. At the beginning I was considering docker but soon learned that it requires a 64-bit host to run on. Therefore I decided on lxd-containers. And now they really work fine. :) So a big thank you to you Simos and all the other helpers. I'm so glad about the fantastic help I got. Greetings. Rosika