LinuxQuestions.org


Rosika 02-11-2018 07:08 AM

general question: container isolation-level
 
Hi all,

I've got a general question about Linux containers, in particular with regard to LXD/LXC.

When running programs/processes within an LXC container: to what extent are they isolated from the host system?

I mean I can run programs within a virtual machine (with a certain amount of overhead) or run programs within a sandbox (like firejail).
In those cases isolation from the host seems to be quite effective, at least to my knowledge.

How does the isolation level of those compare to LXD/LXC?

Thanks in advance for any information.

Greetings
Rosika :scratch:

P.S.:
system: Linux/Lubuntu 16.04.3 LTS, 64 bit

pan64 02-11-2018 10:21 AM

I'm not sure what you mean by quite effective, but I think the isolation level is quite effective with LXD/LXC too.
Do you have any special use case, issue or example to discuss?

simosx 02-13-2018 05:26 AM

Quote:

Originally Posted by Rosika (Post 5818460)
I've got a general question about Linux containers, in particular with regard to LXD/LXC.

When running programs/processes within an LXC container: to what extent are they isolated from the host system?

I mean I can run programs within a virtual machine (with a certain amount of overhead) or run programs within a sandbox (like firejail).
In those cases isolation from the host seems to be quite effective, at least to my knowledge.

How does the isolation level of those compare to LXD/LXC?

Thanks in advance for any information.

Virtualbox uses hardware-assisted virtualization, and creates a new environment that will boot a full operating system.
That is, it will boot the Linux kernel and then start the rest of the operating system.
For usability, Virtualbox offers a trick to share the graphics card with the host operating system. This is required
if we want hardware acceleration for graphics. If we enable 3D acceleration, we can even play 3D games in Virtualbox.
In addition, when you install the Guest Additions, you allow more communication between the VM and your host.
This is good for usability, because you can, for example, copy/paste between the host and the VM.
However, it can lead to security vulnerabilities such as those described at https://www.techrepublic.com/article...in-virtualbox/
As always with security vulnerabilities, you install the updates and you are OK.

LXC 1.0, LXD and firejail use security features (namespaces, cgroups) of the Linux kernel
in order to run processes isolated from the rest of the system. The end result is different,
and you have the choice to select which one is better for you.
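
For a rough illustration of what a kernel namespace does (this uses plain util-linux, not LXD or firejail, and is just a sketch):

Code:

# run ps in fresh PID and mount namespaces; it sees only itself
sudo unshare --pid --fork --mount-proc ps aux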

The big difference that LXD gives you is that you create machine containers.
A machine container is like a virtual machine. You start the machine container
and it boots up, gets an IP address and is ready to use. The machine container shares the same
Linux kernel as the host, therefore it does not boot a separate Linux kernel.
Because of that, you can have ten times more machine containers on the same server than virtual machines.
And if you create machine containers with distros like Alpine, you can fit even more machine containers.
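
Creating such a machine container is a one-liner (the container name here is just an example):

Code:

lxc launch ubuntu:16.04 mycontainer   # downloads the image on first use, then creates and starts it
lxc exec mycontainer -- bash          # get a root shell inside the container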

The Linux kernel security features provide the protection and isolation of what is going on inside the container.
A container can be given no access to the host at all, not even network connectivity.
If you use the LXD installation defaults, then the containers will get a private IP address.
By default the containers are not accessible from the local network, and they cannot directly access
computers on your local (i.e. home) network. They do get Internet connectivity though.
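
You can check the private IP address each container got with lxc list (the address below is made up):

Code:

$ lxc list -c ns4
+-------------+---------+----------------------+
|    NAME     |  STATE  |         IPV4         |
+-------------+---------+----------------------+
| mycontainer | RUNNING | 10.52.251.121 (eth0) |
+-------------+---------+----------------------+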

However, to make the best use of LXD, you may want to relax the restrictions a bit.
For example, if you follow the guide at https://blog.simos.info/how-to-make-...from-your-lan/
you can make some of the containers appear as standalone hosts on your local network.
That is, if your computer has IP 192.168.1.10, then you can get three containers exposed (and accessible) to the local network
with IP addresses 192.168.1.11, 192.168.1.12, 192.168.1.13.
No need for separate physical computers; you can just use machine containers.
It is up to you to make sure that these special machine containers are secure (do not install bad software).

Also, you can run GUI apps in a machine container according to the guide at
https://blog.simos.info/how-to-run-g...buntu-desktop/
For example, you can run Steam inside the container with full graphics acceleration.
The way it works is that the machine container also has full access to the X session.
This means that you should not run unknown GUI programs in that way.
On the other hand, it's up to you to use Xephyr instead, which provides isolation.

Overall, LXD is in the same group as LXC 1.0, firejail and others. Depending on your needs, it may be suitable to select LXD over the others.

wpeckham 02-13-2018 05:31 AM

In my experience, host processes are very well hidden from the container; container processes are not well hidden from the host. It somewhat depends upon what access to the process information worries you.
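
A quick way to see this asymmetry for yourself (the container name is only an example):

Code:

ps -ef                           # on the host: the container's processes are listed too
lxc exec mycontainer -- ps -ef   # inside: only the container's own processes appear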

Rosika 02-13-2018 08:59 AM

Hi pan64,

thanks for your answer.
My question was rather of a theoretical nature. I more or less wanted to know whether a process running within a container is as secure as running it in a sandbox like firejail.
Secure in the sense of providing a "shield" against unwanted changes to the host.

Greetings.
Rosika

Rosika 02-13-2018 09:02 AM

Hi wpeckham,

thank you for the answer.
I have no particular scenario in mind. I just wanted to see if there are any differences in general as compared to a sandbox (firejail).

Cheerio.
Rosika

Rosika 02-13-2018 09:16 AM

Hi simosx,

thank you so much for your very detailed answer and the interesting links.

I also read about the principal differences between virtual machines (based on a hypervisor etc.) and containers.
But the way you pointed out the important points is very informative and good to read.

I already installed lxd within a virtual machine (xubuntu 16.04.3 LTS, 32 bit) and "played around" with it in order to gain some experience before installing it on my host (Lubuntu 16.04.3 LTS).

In a whitepaper provided by Canonical ("For CTO's: the no-nonsense way to accelerate your business with containers") it says on page 10:
Quote:

However, there are disadvantages of containers compared to traditional
hypervisor-based virtualisation that include:
[...]
Fourthly, there are disadvantages of running a single kernel
– an ‘all eggs in one basket’ argument. If that kernel crashes or is exploited,
perhaps due to a vulnerability within an application which itself is insecure,
the whole machine is compromised; and upgrading that kernel is thus
arguably more problematic (as more depends on it) than upgrading kernels
in a traditional hypervisor based environment
This is the bit that worries me.
The author talks about the possibility of the kernel being crashed "due to a vulnerability within an application which itself is insecure".

I'm not quite sure what he/she means by that. What application? The one running within the container, or the one running on the host system? I mean, is it as easy as that for the containerized application to get out of its container and do damage to the kernel?

Therefore my question.

Greetings.
Rosika

pan64 02-13-2018 09:34 AM

Containers use the kernel of their host; they have no kernel of their own. So if there were an app that could harm the system, it would most probably crash the host (which runs all the containers), not only the container itself.
On the other hand, if there were a kernel vulnerability, you would need to upgrade all of your hosts, including VMs, and therefore reboot/restart all of them (bare metal, VMs and containers).

Rosika 02-13-2018 09:54 AM

Hi pan64,

thanks again for your reply.
Quote:

So if there were an app that could harm the system, it would most probably crash the host
That's a bit of a shame. But that's what I already feared.
So is it safe to say that running an "untrusted" application in a virtual machine or even a sandbox for that matter provides a higher degree of safety?

Rosika

simosx 02-13-2018 11:21 AM

Hi Rosika,

Quote:

Fourthly, there are disadvantages of running a single kernel
– an ‘all eggs in one basket’ argument. If that kernel crashes or is exploited,
perhaps due to a vulnerability within an application which itself is insecure,
the whole machine is compromised; and upgrading that kernel is thus
arguably more problematic (as more depends on it) than upgrading kernels
in a traditional hypervisor based environment
Finding a way to crash the Linux kernel is not so easy. We are talking here about the Linux kernel code sans device drivers.

The quote refers more to mission-critical applications, or to cases where you are researching malware samples.
In that case, you would probably get hardware that supports both VT-x and VT-d,
and definitely use a VM.

With LXD, you can set restrictions on the amount of computing power, memory and disk space for a container.
In that way, you can keep control of a container that happens to be accessible to outsiders.
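
Such limits are plain configuration keys (a sketch; the container name and values are illustrative):

Code:

lxc config set mycontainer limits.cpu 2          # at most two CPU cores
lxc config set mycontainer limits.memory 512MB   # cap the memory usage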
For example, have a look at https://linuxcontainers.org/lxd/try-it/
There, you can get a shell into an LXD container and use it for free for 30 minutes.
In this LXD container, you get your own nested LXD container that you can use to launch your own containers.
You can give it a go and see how well it works.
The source code of this demo server is also available, https://insights.ubuntu.com/2016/04/...xd-in-lxd-812/

sundialsvcs 02-13-2018 02:30 PM

I like to use the analogy that "containers are a very clever illusion." :)

They bring together several different Linux kernel facilities to create an environment in which a process can believe that it knows what the filesystem looks like, and it believes that it knows what its user-id is, and it believes that it can become root when it wants to. And so on. But, none of this is actually the truth.

The container occupant's "rose-colored glasses" view of the world is actually mapped onto the actual environment of the Linux host, but the container occupants can't see that.

Processes running outside of the container environment can perceive the processes that are running in "container mode," but not the other way around.

So what this gives you is ... "good isolation, cheaply." You don't have the overhead of a virtual machine. You do have the isolation that you need. Although it isn't the same kind of isolation that a VM provides, it is very often good enough. (The overhead of a virtual-machine environment is quite noticeable when you don't have it.)

As a good for-instance, I often build (or, re-build) websites and such that run on VMWare. At one time I would build-out a bunch of small virtual machines. But I since learned to use containers, instead, with very capacious virtual machines. The exposure to VMWare's behaviors – which I typically have little or no control over – is sharply curtailed. Now, I have control over the environment. There are, of course, now "container hosts" which do not use a visible VMWare layer at all: they run honkin' big Linux boxes, and they run your containers directly on them.

wpeckham 02-13-2018 04:51 PM

Isolation techniques and advantages are an interesting field of study.

OpenVZ had superior isolation, thanks to its kernel patches, up to version 6; version 7 is a total departure and I am unsure how different it is internally. OpenVZ also allowed for superior management of some resources.

LXC-based containers were less mature, with somewhat less isolation, but based entirely upon the upstream kernel. LXC has continued to mature, but I am unsure just how well it now isolates functions that could affect the security of the running kernel. LXC had the additional advantage that it could be used to isolate an entire distro environment, or just ONE process, service, or software environment.

EITHER of them was superior to sandbox and chroot techniques at one time. I have not followed the changes for the last couple of years, and never did follow the LXD improvements from Ubuntu/Canonical.

Summary of my noise(?): how effective the isolation is depends upon your purpose, version, settings, and situation. What is the need, what (and what versions/configurations of) applications are involved, and what is the perceived threat? With those answers and some research, one could be ready for some REAL WORLD TESTING, which is the only way to get a definitive answer.


Sounds like fun!

Rosika 02-14-2018 06:18 AM

Hi simosx,

tnx for your answer.
Quote:

[...] or if you are researching malware samples.
O.K., that's not my intention.
I was just looking for the principal differences between various isolation techniques.
The links you provided are also quite helpful.

Greetings.
Rosika

Rosika 02-14-2018 06:25 AM

Hi sundialsvcs,

Quote:

Processes running outside of the container environment can perceive the processes that are running in "container mode," but not the other way around.
O.K., I have to admit I didn't know that. Yet it explains a lot.
The overhead argument is the one that got me interested in containers in the first place. Not having to power up a VM just to quickly launch a certain process is a huge advantage.

Tnx a lot for further clarification.

Greetings.
Rosika

Rosika 02-14-2018 06:37 AM

Hi wpeckham,

Quote:

LXC had the additional advantage that it could be used to isolate an entire distro environment, or just ONE process, service, or software environment
Tnx for pointing that out. Sounds promising.
Quote:

[...] how effective the isolation is depends upon your purpose, version, settings, and situation
O.K., I see that there's no really simple answer, at least not a universally applicable one.
As it all seems to depend on the situation, taking a good look at various container technologies and doing some real-world testing seems advisable.

Now I know a lot more than I used to.
Thanks again to you and all the other helpers.

Rosika

sundialsvcs 02-14-2018 07:29 AM

To my way of thinking, containers are – as I said – an illusion that's especially intended to isolate processes on the inside of the container from correctly perceiving the world outside of it. (And, to prevent them from consuming more than their allotted share of resources.) But, I think, you trust these processes not to be malicious. They're in a container, and they're not trying to get out.

Since the whole thing is basically a bunch of kernel configuration parameters, with a certain group of processes running with that same set of parameters (that "container") in effect, there is really no overhead. And that's the point. Although virtual machines also rely upon hardware assistance, there's a lot more overhead associated with them. If you don't actually need what only a VM can do, containers are a compelling alternative that can serve ordinary isolation requirements very efficiently.

The fact that they are "ordinary processes running directly on a Linux kernel," even though they're wearing funny glasses and a straitjacket, can also work to your advantage because they can be more easily interacted with from the outside.

Rosika 02-14-2018 09:32 AM

Hi sundialsvcs,

Quote:

But, I think, you trust these processes not to be malicious. They're in a container, and they're not trying to get out
So "trusted" processes (= "normal" ones which I also had no problem in running outside a container or sandbox) are O.K.
Thus better not try running any funny things in containers. I understand.
But (only theoretically): would running them in VMs or firejail provide a higher degree of protection for the host?

Rosika

simosx 02-14-2018 11:24 AM

Quote:

Originally Posted by Rosika (Post 5819780)
So "trusted" processes (= "normal" ones which I also had no problem in running outside a container or sandbox) are O.K.
Thus better not try running any funny things in containers. I understand.
But (only theoretically): would running them in VMs or firejail provide a higher degree of protection for the host?

You may want to run even "trusted" processes in an LXD container, so that they do not mess up your host (adding repositories, packages and dependencies).
For example, if there is a Nodejs app that you need to run, better put it in a container. Then, you can remove the container and any trace of it is gone.
See, for example, https://blog.simos.info/how-to-insta...lxd-container/
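
The throwaway workflow is roughly this (the container name and app are just examples):

Code:

lxc launch ubuntu:16.04 nodeapp                  # fresh container for the app
lxc exec nodeapp -- sudo --login --user ubuntu   # work inside it: repositories, packages, the app
lxc delete --force nodeapp                       # afterwards, remove it all in one go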

Between LXD and firejail, the latter requires you to create the correct configuration (profile).
If you make the configuration very restrictive, the process may crash. If you relax the security, it may be too open and miss required restrictions.
There are no known vulnerabilities in the default configuration of LXD. If something appears down the line, it will get fixed quickly.
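
To give an idea of the two extremes with firejail (firefox is just an example target):

Code:

firejail --net=none --private firefox   # very strict: no network, throwaway home directory
firejail --noprofile firefox            # no profile restrictions at all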

Rosika 02-14-2018 12:05 PM

Hi simosx,

tnx again.
Your link makes for interesting reading. There's still much for me to learn....
Quote:

Between LXD and firejail, the latter needs from you to make the correct configuration (profile).
Yes, that's often a bit of a hassle.
I've been using firejail for quite a while now and have learned the hard way that it doesn't always work the desired way.

I do not want to be misunderstood: for most applications it works just fine. And there are a lot of profiles (https://github.com/netblue30/firejail/tree/master/etc).
Yet there's still a vmplayer.profile missing (though there's one for VirtualBox). And I haven't succeeded in creating the correct configuration for it myself yet.
So your point is very valid.

Greetings.
Rosika

simosx 02-14-2018 01:28 PM

Hi Rosika,

Let's add a bit more. I hope the discussion remains interesting.
Apart from LXD and firejail, there are also the snap packages that are based on Linux security features.
I would say that firejail is closer to snap packages than to LXD.

With snap packages, an application is described in a configuration file called snapcraft.yaml, and then it is built (from source) into a snap package.
Then, you may upload this snap package to the Ubuntu Store so anyone can make use of it. Snap packages are supported in many major distributions.

Here is the firejail configuration for darktable, https://github.com/netblue30/firejai...ktable.profile
Here is the snapcraft.yaml configuration for darktable, https://github.com/kyrofa/darktable-...snapcraft.yaml
A quick look shows me that these are almost equivalent. To make a proper comparison, look at the plugs section in snapcraft.yaml (which interfaces are allowed).

To install darktable as a snap, you would run

Code:

snap install darktable

Rosika 02-15-2018 08:06 AM

Hi simosx,

Quote:

I hope the discussion remains interesting.
It sure does. Tnx a lot for that.
I've heard about snap. Yet it's not installed by default on my Lubuntu, so it was never really on my mind.
The only thing I knew was that it's some kind of package format to be used alongside the normal package management.
What I didn't know is that it has security mechanisms implemented. So I'll look into that. Your links are helpful. :study:

My interest in container technology stems from the fact that I wanted to get teamviewer running in a sandbox (firejail).
Up until now it is the only program I use that doesn't work within firejail.
So I'm looking for alternatives to get teamviewer going in a secure environment. And thus containers came to my mind.

Greetings.
Rosika

P.S.:
If you are interested in why teamviewer doesn't work within firejail:

Terminal-output:
Code:

rosika@rosika-Lenovo-H520e ~> firejail teamviewer
Reading profile /etc/firejail/default.profile
Reading profile /etc/firejail/disable-common.inc
Reading profile /etc/firejail/disable-passwdmgr.inc
Reading profile /etc/firejail/disable-programs.inc

** Note: you can use --noprofile to disable default.profile **

Parent pid 10157, child pid 10158
Child process initialized in 6984.77 ms

Init...
XRandRWait: No value set. Using default.
XRandRWait: Started by user.
Checking setup...
Launching TeamViewer ...
Starting network process (no daemon)
terminate called without an active exception
/opt/teamviewer/tv_bin/script/tvw_exec: Zeile 95:  116 Abgebrochen            "$TV_BIN_DIR/teamviewerd" -n -f
Network process already started (or error)
Launching TeamViewer GUI ...

Parent is shutting down, bye...

TeamViewer itself presents a GUI-based text message:
Quote:

"teamviewer daemon not running.
Please start daemon before using TeamViewer (needs root):
----------teamviewer --daemon start ----------
[...]"
This known problem is discussed on https://github.com/netblue30/firejail/issues/825.
But no solution so far.

simosx 02-15-2018 12:25 PM

Hi Rosika,

I tried as well to get TeamViewer into an LXD container according to the guide at
https://blog.simos.info/how-to-run-g...buntu-desktop/
It did not work in the beginning but now it almost works.
It works locally only and cannot get a connection to the TeamViewer servers. I am mystified as to why it
cannot connect with the TeamViewer servers even though the LXD container has Internet connectivity.
I assume Teamviewer carries lots of baggage that makes it behave weirdly on non-standard systems.

Here is how it works in an LXD container.

1. Set up an LXD container according to
https://blog.simos.info/how-to-run-g...buntu-desktop/

2. Connect to the LXD container with

Code:

$ lxc console guiapps
It has to be through an LXD console for some reason. Otherwise it gives weird errors.

3. Run teamviewer

Code:

ubuntu@guiapps:~$ teamviewer

Init...
CheckCPU: SSE2 support: yes
Checking setup...
Launching TeamViewer ...
Launching TeamViewer GUI ...

https://i.imgur.com/pDHNDsj.png

I did not try all other network connectivity options (use proxy, etc).

On another note, I put online an index of my LXD tutorials,
https://discuss.linuxcontainers.org/...-of-simos/1228

Rosika 02-17-2018 08:21 AM

Hi Simos,

thanks for your reply.
I couldn't answer yesterday because the LinuxQuestions server was down for a while.

O.K., I'll do the following:
according to your guide for running GUI apps in LXD containers, I'm going to try to get teamviewer running.
But I doubt that I'll be more successful than you. Because... why should I be? You're the professional here. :hattip:
But as I said, I'll give it a try.
As soon as I have any results (or none), I'll post them here.

Thanks also for the index of your LXD tutorials. Very impressive.

Greetings.
Rosika

simosx 02-19-2018 02:31 PM

Quote:

Originally Posted by Rosika (Post 5820821)
O.K., I'll do the following:
according to your guide for running GUI apps in LXD containers, I'm going to try to get teamviewer running.
But I doubt that I'll be more successful than you. Because... why should I be? You're the professional here. :hattip:
But as I said, I'll give it a try.
As soon as I have any results (or none), I'll post them here.

Hi Rosika,

I gave it another try and have come up with the following:
TeamViewer works in Linux over LXD, as long as you do not use the latest Teamviewer 13.
TeamViewer 13 is based on Qt, and is a departure from the older versions that use Wine.
Using Qt by itself should not be an issue.

I did the easy task and tried out TeamViewer versions 10, 11 and 12. All from https://www.teamviewer.com/en/downlo...ious-versions/
And they just worked. I simply got the TAR files, extracted them and ran TeamViewer.

I hope I can figure out why TeamViewer 13 does not work on LXD.

edit: here is a guide, https://blog.simos.info/how-to-run-teamviewer-in-lxd/

sundialsvcs 02-19-2018 06:22 PM

:scratch: At the moment, the only thing I associate with "trusted container" is the question of whether-or-not the container occupant, when it attempts to "become 'root,'" actually does so on the host machine.

And, as far as I'm concerned, no container should ever be so "trusted." A containerized process should live in its own happy, isolated, world, and should be in every way confined to it. If something needs to be done "to" the actual host environment, IMHO it should only be done "in" that environment.

Rosika 02-22-2018 08:16 AM

Hi simosx,

tnx a lot for your reply. Sorry for the belated answer.

Alas, I couldn't get teamviewer running.
I proceeded as follows:

According to your very well-written guide I got my lxd container running. I also named it "guiapps". That went well.
Then I installed teamviewer. Yet it was version 13. After reading your latest post I uninstalled it and got the "v11.0.67687" version from https://www.teamviewer.com/en/downlo...ious-versions/.
That was o.k. as well. But as with version 13, I get an error message when trying to start it:
ubuntu@guiapps:~/alte_version_teamviewer/teamviewer$ ./teamviewer
Quote:

Init...
*** TeamViewer can not be executed with sudo! ***
Either use your normal user account without sudo
or use a the real root account to log in to your desktop (not recommended!).

chown: changing ownership of '/home/ubuntu/alte_version_teamviewer/teamviewer/logfiles/startup.log': Operation not permitted
I logged into my container using the command lxc exec guiapps -- sudo --login --user ubuntu, as you recommended in your tutorial.
Yet I'm not quite sure what the "sudo" command does there. Am I logged in with sudo? Might that be the cause of the denial?

Code:

lxc console guiapps
doesn't work for me. I get the error:
Code:

error: unknown command: console
It's a bit of a shame that I cannot get teamviewer running. Could you suggest some way to get this done?

Greetings.
Rosika

Rosika 02-22-2018 08:22 AM

Hi sundialsvcs,

tnx for your comment.
Quote:

A containerized process should live in its own happy, isolated, world, and should be in every way confined to it.
That's a valid point.
So the thing is: how could one prevent processes within the container from becoming "root"?
If I understand you correctly, containerization wouldn't be the way to go for running untrusted processes in an isolated environment.

Greetings.
Rosika

pan64 02-22-2018 08:29 AM

sudo --login --user ubuntu
means: switch to the user ubuntu and simulate a login. So in the end, when you enter the container this way, it will look like the user ubuntu logged in.
The container itself initially started as root.
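
You can verify what that login gives you (the output is sketched from a stock ubuntu container image):

Code:

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@guiapps:~$ id
uid=1000(ubuntu) gid=1000(ubuntu) groups=1000(ubuntu),27(sudo),...

So inside the container you are the normal user ubuntu, not root.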

Rosika 02-22-2018 08:55 AM

Hi pan64,

tnx for the explanation.
Quote:

[...] it will look like the user ubuntu logged in.
O.K., I understand.
But if the user ubuntu is logged in that means "normal user", right?
So I fail to understand why I get
Quote:

*** TeamViewer can not be executed with sudo! ***
Either use your normal user account without sudo
or use a the real root account to log in to your desktop (not recommended!).
when trying to start teamviewer.

Greetings.
Rosika

simosx 02-22-2018 09:24 AM

Quote:

Originally Posted by Rosika (Post 5822821)
I logged into my container using the command lxc exec guiapps -- sudo --login --user ubuntu, as you recommended in your tutorial.
Yet I'm not quite sure what the "sudo" command does there. Am I logged in with sudo? Might that be the cause of the denial?


doesn't work for me. I get the error:

It's a bit of a shame that I cannot get teamviewer running. Could you suggest some way to get this done?

Hi Rosika,

Thanks for going through the tutorial!

I tried to get TeamViewer to work with LXD several times before (those times were unsuccessful).
During the tests, I formed the opinion that the TeamViewer source contains a lot of legacy code that makes it behave weirdly.
One of those weird behaviors is exactly the example you are giving. Another was complaining that world-readable files were not readable.

As I write in https://blog.simos.info/how-to-run-teamviewer-in-lxd/ there are three common ways to connect to your LXD container,
  1. lxc console guiapps
  2. ssh ubuntu@10.xx.xx.xx
  3. lxc exec guiapps -- sudo --user ubuntu --login

TeamViewer is so weird that lxc exec is not good enough to run it. You must use lxc console instead.

lxc console is a new command in LXD; if you are on Ubuntu 16.04, you have a somewhat older (but fully supported until 2021) version of LXD that does not have it yet.
There are two ways to upgrade to the latest LXD,

One way is to install the snap version of LXD according to the instructions at https://blog.simos.info/how-to-migra...-snap-package/

The other way is to install LXD from the backports repository. To do so, enable the backports repository in Software & Updates (software-properties-gtk). Click to tick the highlighted line that says xenial-backports.
https://i.imgur.com/rSrUMFt.png
and then run
Code:

sudo apt install lxd=2.21-0ubuntu3~17.10.1 lxd-client=2.21-0ubuntu3~17.10.1
This should get you LXD 2.21 which is recent enough for lxc console.

sundialsvcs 02-22-2018 03:14 PM

root is a special case with containers.

Processes which are running in a containerized environment need to have their own personal perspective of what "user-ids" are. Containers handle this by mapping the user-ids that are perceived by the container to the actual user-ids of the host. Obviously, processes from time to time need to switch to a uid=0 context, and to then appear to(!) exercise super-powers within their container.

The question is: "does the host see 'root?'" Do the powers of a "super-user" within a container therefore extend to the host?

If the container is "non-privileged," as it certainly always should be, a process in a container can believe that it is "root" ... when, to the Linux host, it actually isn't. Linux will dutifully report to the process that it has uid=0. Linux will – up to a point – maintain the illusion of "rootliness." But its actual user-id, as seen by the host, might be 123456. (This arbitrary but non-zero value is unknown to it.) In reality, the process cannot exercise root privileges upon the host. But it is king of its little world.

If the container is "privileged," then its user-id really is zero, even on the host.

For obvious reasons, I counsel that a container should n-e-v-e-r be privileged. If you need to do something on the host which actually requires root privileges on the host, do it outside of the container. You can see the mapping that the container cannot see, so you can arrange things to look right when viewed from the container's perspective.
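
You can peek at the mapping from inside a container (the numbers below are illustrative; they depend on how the host was set up):

Code:

$ cat /proc/self/uid_map
         0     231072      65536

Read that as: uid 0 inside the container is uid 231072 on the host, for a range of 65536 ids.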

simosx 02-22-2018 03:44 PM

Quote:

Originally Posted by sundialsvcs (Post 5822957)
For obvious reasons, I counsel that a container should n-e-v-e-r be privileged. If you need to do something on the host which actually requires root privileges on the host, do it outside of the container. You can see the mapping that the container cannot see, so you can arrange things to look right when viewed from the container's perspective.

To check whether an LXD container is privileged, run

Code:

lxc config get guiapps security.privileged
By default, LXD containers are not privileged. You have to set the flag security.privileged in order for them to be such.
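
An unset key prints as an empty line, so for a default (unprivileged) container you should see no output at all:

Code:

$ lxc config get guiapps security.privileged

$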

Rosika 02-23-2018 06:50 AM

Hi simosx and sundialsvcs,

thank you so much for the explanation.

In the meantime I got teamviewer running within the container, thanks to simos' great tutorial.
As far as the user-id of processes is concerned I have another question though:

Running "ps -ef" in the host gives me amongst other information the following results:
Code:

root      2795    1  1 13:07 ?        00:00:23 /usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
lxd      2956    1  0 13:07 ?        00:00:00 dnsmasq --strict-order --bind-interfaces --pid-file=/var/lib/lxd/networks/lxdbr0/dnsmasq.pid --e
rosika    3629  2361  4 13:15 ?        00:00:48 terminology
rosika    3633  3629  0 13:15 pts/1    00:00:00 /usr/bin/fish
root      3747    2  0 13:16 ?        00:00:00 [kworker/u16:2]
root      4332  905  0 13:19 ?        00:00:00 /sbin/dhclient -d -q -sf /usr/lib/NetworkManager/nm-dhcp-helper -pf /var/run/dhclient-ens33.pid
root      4705    1  0 13:21 ?        00:00:00 [lxc monitor] /var/lib/lxd/containers guiapps
231072    4726  4705  0 13:21 ?        00:00:00 /sbin/init
231072    4830  4726  0 13:21 ?        00:00:00 /lib/systemd/systemd-journald
231072    4839  4726  0 13:21 ?        00:00:00 /lib/systemd/systemd-udevd
root      4921    2  0 13:21 ?        00:00:00 [kworker/0:5]
231072    5208  4726  0 13:22 ?        00:00:00 /sbin/dhclient -1 -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases -I -df /v
231072    5317  4726  0 13:22 ?        00:00:00 /lib/systemd/systemd-logind
231072    5329  4726  0 13:22 ?        00:00:00 /usr/sbin/cron -f
231179    5332  4726  0 13:22 ?        00:00:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
231073    5363  4726  0 13:22 ?        00:00:00 /usr/sbin/atd -f
231176    5364  4726  0 13:22 ?        00:00:00 /usr/sbin/rsyslogd -n
231072    5367  4726  0 13:22 ?        00:00:00 /usr/lib/accountsservice/accounts-daemon
231072    5381  4726  0 13:22 ?        00:00:00 /usr/sbin/sshd -D
231072    5393  4726  0 13:22 ?        00:00:00 /usr/lib/snapd/snapd
root      5428    2  0 13:22 ?        00:00:00 [iscsi_eh]
231072    5509  4726  0 13:22 pts/2    00:00:00 /bin/login --
231072    5566  4726  0 13:22 ?        00:00:00 /usr/lib/policykit-1/polkitd --no-debug
rosika    6164  3633  0 13:23 pts/1    00:00:00 lxc console guiapps
root      6169  2795  0 13:23 ?        00:00:00 /usr/bin/lxd forkconsole guiapps /var/lib/lxd/containers /var/log/lxd/guiapps/lxc.conf tty=0 esc
rosika    6211  4726  0 13:23 ?        00:00:00 /lib/systemd/systemd --user
rosika    6221  6211  0 13:23 ?        00:00:00 (sd-pam)
rosika    6230  5509  0 13:23 pts/2    00:00:00 -bash
rosika    6256  6230  0 13:23 pts/2    00:00:00 fish
rosika    6386  2361  4 13:25 ?        00:00:23 terminology
rosika    6390  6386  0 13:25 pts/4    00:00:00 /usr/bin/fish
root      6646    2  0 13:28 ?        00:00:00 [kworker/u16:0]
root      6650    2  0 13:28 ?        00:00:00 [kworker/u16:3]
rosika    6799  6256  1 13:33 pts/2    00:00:00 c:\TeamViewer\TeamViewer.exe -n
rosika    6930  4726  0 13:33 ?        00:00:00 /home/ubuntu/alte_version_teamviewer/teamviewer/tv_bin/wine/bin/wineserver
rosika    6932  6799  0 13:33 pts/2    00:00:00 /home/ubuntu/alte_version_teamviewer/teamviewer/tv_bin/teamviewerd -n -f
rosika    6965  4726  0 13:33 ?        00:00:00 C:\windows\system32\services.exe
rosika    6970  4726  0 13:33 ?        00:00:00 C:\windows\system32\explorer.exe /desktop
rosika    6976  6799  0 13:33 pts/2    00:00:00 [TeamViewer.exe] <defunct>
rosika    6977  6799  0 13:33 pts/2    00:00:00 /home/ubuntu/alte_version_teamviewer/teamviewer//tv_bin/TVGuiSlave.32 19 6
rosika    6978  6799  0 13:33 pts/2    00:00:00 /home/ubuntu/alte_version_teamviewer/teamviewer//tv_bin/TVGuiDelegate 19 6
root      7122    2  0 13:33 ?        00:00:00 [kworker/0:1]
rosika    7130  6390  0 13:34 pts/4    00:00:00 ps -ef

So I see "rosika" as user-id for "teamviewer" (which is running in the container). Is this right?
Quote:

If the container is "privileged," then its user-id really is zero, even on the host.
O.K., at least the user-id isn't zero. That's good. But is it alright that it is "rosika"? I mean, thus I can kill the process from the host (not only from within the container).
Or is it just due to the fact that it's a GUI-based application (Mapping the user ID ......)?

Greetings.
Rosika

sundialsvcs 02-23-2018 07:24 AM

The user-ids 231xxx are most likely those of the container occupants.

Containers use a mapping-table to map the user-ids seen by the container (including uid=0) into the actual ones known to the host. Container occupants do not know what the mapping is. And you'd need to look in the same place to see if any of them are mapped to rosika.
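
On the host, the allotted ranges are recorded in /etc/subuid and /etc/subgid (the values below are illustrative; check your own files):

Code:

$ cat /etc/subuid
lxd:231072:65536
root:231072:65536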

Also – Wine is a bit of a special case within a containerized environment because it has to have access to the host's X-server. "Google it" for more details.

Yes, because the container processes are, in fact, "Linux processes," the host can kill them. But I suggest that you do it within the appropriate container environment. You don't want to do something that might break the illusion . . .

Rosika 02-23-2018 08:34 AM

Hi sundialsvcs,

tnx a lot.
Info: I forgot to mention: "lxc config get guiapps security.privileged" gave me no output whatsoever. Thus I assume everything is alright and I'm running an unprivileged container. ;)

Quote:

Yes, because the container processes are, in fact, "Linux processes," the host can kill them. But I suggest that you do it within the appropriate container environment. You don't want to do something that might break the illusion . . .
Well, I did it just once in order to make a correct statement here. Hope it didn't hurt too much. But thanks for your suggestion.

Yet there's still one thing that puzzles me:

When logging into my container by using the command lxc console guiapps I cannot make use of top.
I get the normal display of processes but it more or less stops working immediately. Furthermore it says:
Quote:

Unknown command - try 'h' for help
I have no idea why that is.
It's a bit of a puzzler, given the fact that when entering the container with lxc exec guiapps -- sudo --login --user ubuntu,
top works as it should! With no weird behaviour whatsoever.
And it's similar with htop. That one at least seems to work, but when closing it with CTRL+C the terminal shows
Code:

[?64;1;2;6;9;15;18;21;22c
That's also the case with top.
Do you have any idea why (h)top is behaving this way and what I could do about it?

Greetings.
Rosika

simosx 02-23-2018 09:00 AM

Hi Rosika and sundialsvcs,

The instructions at https://blog.simos.info/how-to-run-g...buntu-desktop/
have a step that does userid mapping for your non-root user from the host to the container.
It is the part with title Mapping the user ID of the host to the container (PREREQUISITE).

In other words, the processes of the container's non-root user account can be affected by the host's non-root user account. For example, your host's non-root user account can kill a GUI process running in the container.
However, as far as I understand, the process that launched in the container does not have access to the host's filesystem.
The container is not privileged, but there is a hole to enable running GUI apps with graphics acceleration.
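
For reference, the id mapping part of that "hole" looks along these lines (a sketch; the exact commands are in the guide):

Code:

# map your host uid/gid 1000 to uid/gid 1000 in the container
lxc config set guiapps raw.idmap "both 1000 1000"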

The alternative would be to use a separate graphics-accelerated X server like Xephyr, and send the container's output there.
I have not tried this.

Regarding top and htop, they both work for me on the guiapps container and also a vanilla container.
I tried with both gnome-terminal and xterm (of course running these terminal emulators on the host).

Rosika 02-23-2018 09:32 AM

Hi Simos,

tnx again.
Quote:

It is the part with title Mapping the user ID of the host to the container (PREREQUISITE).
Yes, I followed your instructions precisely. I just wanted to make sure that everything's running o.k.
And it really seems to be. :cool:

As far as (h)top is concerned, you inspired me to test xterm.
I installed it in the guiapps container, and upon starting it opened a new window (xterminal). I think that's due to the fact that we've got graphics-accelerated GUI apps running in the container.

And now it worked. Top and htop are both running without complaint, even when starting the container with "lxc console guiapps"!
No idea why I couldn't get it working any other way. Yet I'm very pleased with that workaround.

Thanks a lot for your help.
And a big thank you to all the other helpers too.

Greetings.
Rosika :party:

sundialsvcs 02-23-2018 11:44 AM

Quote:

Originally Posted by simosx (Post 5823209)
In other words, the processes of the container's non-root user account can be affected by the host's non-root user account. For example, your host's non-root user account can kill a GUI process running in the container.
However, as far as I understand, the process that launched in the container does not have access to the host's filesystem.
The container is not privileged, but there is a hole to enable running GUI apps with graphics acceleration.

The user-ids seen by the container occupant are specific to the container's environment and the container occupant thinks that he has control of them. What he does not know is that they are being mapped to host-side user-ids. (Ditto group-ids.) You determine the mapping when you set up the container. Yes, other processes which are not running with the container's rose-colored-glasses on can also see those processes ... as they actually are.

You also determine – chroot-jail style – what filesystem topology it is able to see. You can also set strict limits on what resources it may consume.

When a "container occupant" process is dispatched by the host, the entire set of kernel-settings that implements these various illusions is put in place for that process every time it runs. Other processes running on the same machine might not have these illusions put in front of their eyes. (Or, they may have a different set of illusions.) But, conceptually, "a container" is: "a cleverly-crafted and efficient illusion." The occupants believe that they know what the real world looks like, but they are totally wrong. (And, they don't care. It's good enough for them.)

Rosika 02-25-2018 08:59 AM

Hi sundialsvcs,

tnx for the detailed explanation.
For a beginner in containerization, not everything is easy to understand. Yet the more I read (especially the good explanations and tutorials provided by you and simosx) the more I get it. It's a really fascinating field.
Perhaps I'm a bit paranoid, but I think nowadays it's certainly more important than it used to be to implement as many security mechanisms as possible. ;)

And it's not just teamviewer I wanted to get running, but that was my main objective in the beginning.
(That's due to the fact that I couldn't get it running within firejail.)
But running it in the lxd container is successful. Really fascinating! :cool:

So now that everything works fine, there's just one thing which puzzles me a bit.

When logging in with "lxc console guiapps" I get the following message:

Code:

run-parts: /etc/update-motd.d/98-fsck-at-reboot exited with return code 1
I'm not sure what to make of it and what it really means.
Everything is working fine though, so it doesn't seem to have any consequences. Yet I'm curious about it.

Do you have any ideas?

Greetings.
Rosika

pan64 02-25-2018 09:05 AM

motd is: message of the day (see man motd). You may check what is in that script. The message probably means: when you started the container (which is similar to a boot), fsck was executed but failed for some reason. You may need to check the logs related to it.

Rosika 02-25-2018 09:44 AM

Hi pan64,

tnx for your help.
I found the file "98-fsck-at-reboot".
But it just contains the script:
Code:

#!/bin/sh

if [ -x /usr/lib/update-notifier/update-motd-fsck-at-reboot ]; then
    exec /usr/lib/update-notifier/update-motd-fsck-at-reboot
fi

In /var/log I couldn't find any logs related to that phenomenon.

Code:

-rw-r--r-- 1 root  root  3184 Feb 25 14:26 alternatives.log
-rw-r----- 1 root  adm    3183 Feb 19 16:36 apport.log
drwxr-xr-x 2 root  root  4096 Jan 26 04:45 apt/
-rw-r----- 1 syslog adm  30479 Feb 25 15:17 auth.log
-rw------- 1 root  utmp  4608 Feb 22 16:35 btmp
-rw-r--r-- 1 syslog adm  931570 Feb 25 15:13 cloud-init.log
-rw-r--r-- 1 root  root  32625 Feb 25 15:13 cloud-init-output.log
drwxr-xr-x 2 root  root  4096 Oct 20 10:35 dist-upgrade/
-rw-r--r-- 1 root  root 257183 Feb 25 14:26 dpkg.log
-rw-r--r-- 1 root  root    510 Feb 19 16:05 fontconfig.log
drwxr-xr-x 2 root  root  4096 Jan 26 04:43 fsck/
-rw-r----- 1 syslog adm    1470 Feb 25 15:13 kern.log
-rw-rw-r-- 1 root  utmp 292292 Feb 25 15:13 lastlog
drwxr-xr-x 2 root  root  4096 Dec  7 21:54 lxd/
-rw-r----- 1 syslog adm  237309 Feb 25 15:28 syslog
drwxr-x--- 2 root  adm    4096 Feb 22 13:38 unattended-upgrades/
-rw-rw-r-- 1 root  utmp  44928 Feb 25 15:13 wtmp

I logged in at around Feb 25 16:15. So no entry there.

But thanks anyway.

Greetings.
Rosika

pan64 02-25-2018 09:56 AM

So most probably the execution of /usr/lib/update-notifier/update-motd-fsck-at-reboot has failed.
There is a dir named fsck; you can probably find something in it (or in kern.log, syslog). Check the date/time of execution in the logs; that may help to find the relevant lines.

Rosika 02-25-2018 10:14 AM

Hi again,

I really do appreciate your help. Tnx.
Yet I'm sorry to say that I couldn't find anything helpful.

The fsck-directory has just 2 entries in it:
Code:


ubuntu@guiapps /v/l/fsck> ls -l
total 0
-rw-r----- 1 root adm 0 Jan 26 04:44 checkfs
-rw-r----- 1 root adm 0 Jan 26 04:44 checkroot

Both files are empty.

Neither kern.log nor syslog mention anything remotely connected to fsck.
I'm a bit at a loss here.
But never mind. At least I know the following (thanks to your info):
Quote:

most probably the execution of /usr/lib/update-notifier/update-motd-fsck-at-reboot has failed
BTW: I found out that I'm not alone in experiencing such behaviour.
On https://superuser.com/questions/8802...-return-code-1 there's
a user with that problem as well. But I think that one may have different causes....

Greetings.
Rosika

simosx 02-25-2018 04:26 PM

When you log into Ubuntu, you get a motd with fresh information relating to your system.
The scripts that generate it are in

Code:

ubuntu@guiapps:~$ ls -l /etc/update-motd.d/
total 8
-rwxr-xr-x 1 root root 1220 Oct 22  2015 00-header
-rwxr-xr-x 1 root root 1157 Jun 14  2016 10-help-text
-rwxr-xr-x 1 root root  334 Sep 19 19:17 51-cloudguest
-rwxr-xr-x 1 root root  97 May 24  2016 90-updates-available
-rwxr-xr-x 1 root root  299 Jul 22  2016 91-release-upgrade
-rwxr-xr-x 1 root root  111 May 11  2017 97-overlayroot
-rwxr-xr-x 1 root root  142 May 24  2016 98-fsck-at-reboot
-rwxr-xr-x 1 root root  144 May 24  2016 98-reboot-required
ubuntu@guiapps:~$

For example, 90-updates-available will check whether updates are available, and report back what security updates and other updates are there.

The script 98-fsck-at-reboot reports back to the motd whether an fsck is scheduled or required.
It does not make sense for a container to deal with fsck issues because these are handled by the host.
Therefore, the issue here is that the script should return 0 (no error) and effectively be silent.
That is, we have found a bug and need to report it somewhere.
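
A sketch of the kind of guard the script could use (my assumption of a possible fix, not an actual patch):

Code:

# early in /usr/lib/update-notifier/update-motd-fsck-at-reboot
if systemd-detect-virt --container --quiet; then
    exit 0   # fsck is the host's business; stay silent inside a container
fi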

Where shall we report it to?
Code:

ubuntu@guiapps:~$ dpkg -S /etc/update-motd.d/98-fsck-at-reboot
update-notifier-common: /etc/update-motd.d/98-fsck-at-reboot
$ dpkg -S /usr/lib/update-notifier/update-motd-fsck-at-reboot
update-notifier-common: /usr/lib/update-notifier/update-motd-fsck-at-reboot
ubuntu@guiapps:~$
So, here is the project page, https://launchpad.net/ubuntu/+source/update-notifier
There is a tab there called Bugs where we could report this.
Something like "When logging into a container through the console, we get that error".

Ideally, we should figure out where in /usr/lib/update-notifier/update-motd-fsck-at-reboot we get the error. But how?

We can edit the first line of /usr/lib/update-notifier/update-motd-fsck-at-reboot and change it to
Code:

#!/bin/bash -x
The -x says to show tracing information.

Let's log in again through the console.
Code:

$ lxc console guiapps
To detach from the console, press: <ctrl>+a q

Ubuntu 16.04.3 LTS guiapps console

guiapps login: ubuntu
Password:
Last login: Sun Feb 25 22:01:12 UTC 2018 on console
+ set -e
+ '[' '' = --force ']'
+ stamp=/var/lib/update-notifier/fsck-at-reboot
+ '[' -e /var/lib/update-notifier/fsck-at-reboot ']'
++ stat -c %Y /var/lib/update-notifier/fsck-at-reboot
+ stampt=1519592650
+++ awk '{print $1}' /proc/uptime
++ date -d 'now - 16759.00 seconds' +%s
+ last_boot=1519579337
++ date +%s
+ now=1519596096
+ '[' 1519596250 -lt 1519596096 ']'
+ '[' 1519592650 -gt 1519596096 ']'
+ '[' 1519592650 -lt 1519579337 ']'
+ '[' -n '' ']'
+ cat /var/lib/update-notifier/fsck-at-reboot
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.13.0-36-generic x86_64)
...

Hmm, we do not get the error now. What's wrong? Perhaps we need to restart the container first?
Code:

$ lxc stop guiapps
$ lxc start guiapps
$ lxc console guiapps
To detach from the console, press: <ctrl>+a q

Ubuntu 16.04.3 LTS guiapps console

guiapps login: ubuntu
Password:
Last login: Sun Feb 25 22:01:36 UTC 2018 on console
+ set -e
+ '[' '' = --force ']'
+ stamp=/var/lib/update-notifier/fsck-at-reboot
+ '[' -e /var/lib/update-notifier/fsck-at-reboot ']'
++ stat -c %Y /var/lib/update-notifier/fsck-at-reboot
+ stampt=1519592650
+++ awk '{print $1}' /proc/uptime
++ date -d 'now - 61.00 seconds' +%s
+ last_boot=1519596455
++ date +%s
+ now=1519596516
+ '[' 1519596250 -lt 1519596516 ']'
+ NEEDS_FSCK_CHECK=yes
+ '[' -n yes ']'
+ check_occur_any=
++ awk '$5 ~ /^ext(2|3|4)$/ { print $1 }'
++ mount
+ ext_partitions='/dev/sda5
/dev/sda5
/dev/sda1
/dev/sda1
/dev/sda1
/dev/sda1
/dev/sda1'
+ for part in '$ext_partitions'
++ dumpe2fs -h /dev/sda5
+ dumpe2fs_out='Couldn'\''t find valid filesystem superblock.'
run-parts: /etc/update-motd.d/98-fsck-at-reboot exited with return code 1
ubuntu@guiapps:~$

The command that fails is dumpe2fs -h /dev/sda5. That's the root device of the host...
I have not dug deeper than this. I suppose that it relates to the changes we did to guiapps
in order to run GUI apps. guiapps can see the device name but cannot access it, hence the error.
I tried with a normal container and I do not get the issue.

Therefore, if it is an issue with only guiapps and similar such containers, then I can say that the message
can be safely ignored.

Rosika 02-26-2018 08:43 AM

Hi simosx,

thank you so much for your very detailed explanation. You put a lot of work into it. Very much appreciated. :)

I followed it step by step.
Quote:

[...] guiapps can see the device name but cannot access it, hence the error. [...] Therefore, if it is an issue with only guiapps and similar such containers, then I can say that the message can be safely ignored.
O.K. That's the explanation then.
So to sum up: nothing to be really worried about. Fine!
Quote:

I tried with a normal container and I do not get the issue.
Yeah, it's the same for me. When I log into the container with "lxc exec guiapps -- sudo --login --user ubuntu" I don't get this error message either. Just when using "lxc console guiapps".

Glad this could be sorted out.

I probably forgot to mention that I've been testing these lxd containers by running them within my virtual machine (bodhi linux, 32 bit).
Before installing them on my productive system (Lubuntu, 64 bit) I thought it might be a good idea to get those containers running in my VM. If it's successful there and I have no problems handling them, then they'll probably work to my satisfaction on my host system as well.

At the beginning I was considering Docker, but soon learned that it requires a 64-bit host to run on.
Therefore I decided on LXD containers. And now they really work fine. :)


So a big thank you to you, Simos, and all the other helpers.
I'm so glad about the fantastic help I got.

Greetings.
Rosika

