general question: container isolation-level
Hi everyone,
I've got a general question about Linux containers, in particular with regard to LXD/LXC. When running programs/processes within an LXC container: to what extent are they isolated from the host system? I mean, I can run programs within a virtual machine (with a certain amount of overhead) or run programs within a sandbox (like firejail). In those cases isolation from the host seems to be quite effective, at least to my knowledge. How does the isolation level of LXD/LXC compare to those? Thanks in advance for any information. Greetings Rosika :scratch: P.S.: system: Linux/Lubuntu 16.04.3 LTS, 64 bit |
I'm not sure what you mean by quite effective, but I think the isolation level is quite effective with LXD/LXC too.
Do you have any special use case, issue or example to discuss? |
Quote:
That is, it will boot the Linux kernel and then start the rest of the operating system. For usability, VirtualBox offers a trick to share the graphics card with the host operating system. This is required if we want hardware acceleration for graphics. If we enable 3D acceleration, we can even play 3D games in VirtualBox. In addition, when you install the Guest Additions, you allow more communication between the VM and your host. This is good for usability, because you can, for example, copy/paste between the host and the VM. However, it can lead to security vulnerabilities such as those described at https://www.techrepublic.com/article...in-virtualbox/ As always with security vulnerabilities, you install the updates and you are OK. LXC 1.0, LXD and firejail use security features (namespaces, cgroups) of the Linux kernel in order to run processes isolated from the rest of the system. The end result is different, and you have the choice to select which one is better for you. The big difference that LXD gives you is that you create machine containers. A machine container is like a virtual machine. You start the machine container and it boots up, gets an IP address and is ready to use. The machine container shares the same Linux kernel as the host, therefore it does not boot a separate Linux kernel. Because of that, you can have ten times more machine containers than virtual machines on the same server. And if you create machine containers with distros like Alpine, you can fit even more machine containers. The Linux kernel security features provide the protection and isolation of what is going on inside the container. The container can have no access at all to the host, even no network connectivity. If you use the LXD installation defaults, then the containers will get a private IP address. The containers by default are not accessible from the local network, and they cannot directly access computers on your local (i.e. home) network. 
They get Internet connectivity though. However, to make the best use of LXD, you may want to relax the restrictions a bit. For example, if you follow the guide at https://blog.simos.info/how-to-make-...from-your-lan/ you can make some of the containers appear as standalone hosts on your local network. That is, if your computer has IP 192.168.1.10, then you can get three containers exposed (and accessible) to the local network with IP addresses 192.168.1.11, 192.168.1.12, 192.168.1.13. No need for separate physical computers, you can just use machine containers. It is up to you to make sure that these special machine containers are secure (do not install bad software). Also, you can run GUI apps in a machine container according to the guide at https://blog.simos.info/how-to-run-g...buntu-desktop/ For example, you can run Steam inside the container with full graphics acceleration. The way it works is that the machine container also has full access to the X session. This means that you should not run unknown GUI programs in that way. On the other hand, it's up to you to use Xephyr instead, which provides isolation. Overall, LXD is in the same group as LXC 1.0, firejail and others. Depending on your needs, it may be suitable to select LXD over the others. |
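To make the defaults described above concrete, here is a minimal command sketch (the container name mycontainer is made up for the example; the ubuntu:16.04 image matches the era of this thread):

```shell
# Launch a container; with the LXD defaults it gets a private IP
# on the lxdbr0 bridge and NATed Internet access.
lxc launch ubuntu:16.04 mycontainer

# The IPv4 address shown here is private (e.g. 10.x.y.z), so the
# container is not reachable from the rest of the local network:
lxc list mycontainer

# Internet connectivity still works from inside the container:
lxc exec mycontainer -- ping -c 1 ubuntu.com
```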
In my experience, host processes are very well hidden from the container, container processes are not well hidden from the host. It somewhat depends upon what access to the process information worries you.
|
Hi pan64,
thanks for your answer. My question was rather of a theoretical nature. I more or less wanted to know whether a process running within a container is as secure as running it in a sandbox like firejail. Secure in the sense of providing a "shield" against unwanted changes to the host. Greetings. Rosika |
Hi wpeckham,
thank you for the answer. I have no particular scenario in mind. I just wanted to see if there are any differences in general as compared to a sandbox (firejail). Cheerio. Rosika |
Hi simosx,
thank you so much for your very detailed answer and the interesting links. I also read about the principal differences between virtual machines (based on a hypervisor etc.) and containers. But the way you pointed out the important points is very informative and good to read. I already installed lxd within a virtual machine (xubuntu 16.04.3 LTS, 32 bit) and "played around" with it in order to gain some experience before installing it on my host (Lubuntu 16.04.3 LTS). In a whitepaper provided by Canonical ("For CTOs: the no-nonsense way to accelerate your business with containers") it says on page 10: Quote:
The author talks about the possibility of the kernel being crashed "due to a vulnerability within an application which itself is insecure". I'm not quite sure what he/she means by that. What application? The one running within the container or the one running on the host system? I mean, is it as easy as that for the containerized application to get out of its container and do damage to the kernel? Hence my question. Greetings. Rosika |
containers use the kernel of their host, they have no "own" kernels. So if there is an app which can harm the kernel, most probably it will crash the host (and with it all the containers running on it), not only the container itself.
On the other hand, if there is a kernel vulnerability you need to upgrade all of your hosts, including VMs; therefore you need to reboot/restart all of them (including bare metal, VMs and containers). |
Hi pan64,
thanks again for your reply. Quote:
So is it safe to say that running an "untrusted" application in a virtual machine, or even a sandbox for that matter, provides a higher degree of safety? Rosika |
Hi Rosika,
Quote:
The quote refers more to mission-critical applications, or to researching malware samples. In that case, you would probably get hardware that supports both VT-x and VT-d, and definitely use a VM. With LXD, you can set up restrictions on the amount of computing power, memory and disk space for a container. In that way, you can control a container that happens to be accessible to outsiders. For example, have a look at https://linuxcontainers.org/lxd/try-it/ There, you can get a shell into an LXD container and use it for free for 30 minutes. In this LXD container, you get your own nested LXD installation that you can use to launch your own containers. You can give it a go and see how well it works. The source code of this demo server is also available: https://insights.ubuntu.com/2016/04/...xd-in-lxd-812/ |
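The resource restrictions mentioned above are plain configuration keys; a sketch (the container name mycontainer is hypothetical; limits.cpu and limits.memory are standard LXD configuration keys):

```shell
# Cap the container at two CPU cores and 512 MB of RAM:
lxc config set mycontainer limits.cpu 2
lxc config set mycontainer limits.memory 512MB

# Verify the settings took effect:
lxc config show mycontainer
```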
I like to use the analogy that "containers are a very clever illusion." :)
They bring together several different Linux kernel facilities to create an environment in which a process can believe that it knows what the filesystem looks like, and it believes that it knows what its user-id is, and it believes that it can become root when it wants to. And so on. But, none of this is actually the truth. The container occupant's "rose-colored glasses" view of the world is actually mapped onto the actual environment of the Linux host, but the container occupants can't see that. Processes running outside of the container environment can perceive the processes that are running in "container mode," but not the other way around. So what this gives you is ... "good isolation, cheaply." You don't have the overhead of a virtual machine. You do have the isolation that you need. Although it isn't the same kind of isolation that a VM provides, it is very often good enough. (The overhead of a virtual-machine environment is quite noticeable when you don't have it.) As a good for-instance, I often build (or, re-build) websites and such that run on VMWare. At one time I would build-out a bunch of small virtual machines. But I since learned to use containers, instead, with very capacious virtual machines. The exposure to VMWare's behaviors – which I typically have little or no control over – is sharply curtailed. Now, I have control over the environment. There are, of course, now "container hosts" which do not use a visible VMWare layer at all: they run honkin' big Linux boxes, and they run your containers directly on them. |
Isolation techniques and advantages are an interesting field of study.
OpenVZ had superior isolation due to the superior OpenVZ kernel patches up to version 6: version 7 is a total departure and I am unsure how different it is internally. OpenVZ also allowed for superior management of some resources. LXC-based containers were less mature, with somewhat less isolation, but based entirely upon the upstream kernel. LXC has continued to mature, but I am unsure just how well it now isolates functions that could affect the running kernel's security. LXC had the additional advantage that it could be used to isolate an entire distro environment, or just ONE process, service, or software environment. EITHER of them was superior to sandbox and chroot techniques at one time. I have not followed the changes for the last couple of years, and never did follow the LXD improvements from Ubuntu/Canonical. Summary of my noise (?): how effective the isolation is for your purpose depends upon your purpose, version, settings, and situation. What is the need, what (and what versions/configurations of) applications are involved, and what is the perceived threat? With those answers and some research one could be ready for some REAL WORLD TESTING: which is the only way you get a definitive answer. Sounds like fun! |
Hi simosx,
tnx for your answer. Quote:
I was just looking for the principal differences between various isolation techniques. The links you provided are also quite helpful. Greetings. Rosika |
Hi sundialsvcs,
Quote:
The overhead argument is the one that got me interested in containers in the first place. Not having to power up a VM just to quickly launch a certain process is a huge advantage. Tnx a lot for the further clarification. Greetings. Rosika |
Hi wpeckham,
Quote:
Quote:
As it all seems to be depending on the situation taking a good look at various container-technologies and doing some real-world testing seems to be advisable. Now I know a lot more than I used to. Thanks again to you and all the other helpers. Rosika |
To my way of thinking, containers are – as I said – an illusion that's especially intended to isolate processes on the inside of the container from correctly perceiving the world outside of it. (And, to prevent them from consuming more than their allotted share of resources.) But, I think, you trust these processes not to be malicious. They're in a container, and they're not trying to get out.
Since the whole thing is basically a bunch of kernel configuration parameters, with a certain group of processes running with that same set of parameters (that "container") in effect, there is really no overhead. And that's the point. Although virtual machines also rely upon hardware assistance, there's a lot more overhead associated with them. If you don't actually need what only a VM can do, containers are a compelling alternative that can serve ordinary isolation requirements very efficiently. The fact that they are "ordinary processes running directly on a Linux kernel," even though they're wearing funny glasses and a straitjacket, can also work to your advantage because they can be more easily interacted with from the outside. |
Hi sundialsvcs,
Quote:
So it's better not to try running any funny things in containers. I understand. But (only theoretically): would running them in VMs or firejail provide a higher degree of protection for the host? Rosika |
Quote:
For example, if there is a Nodejs app that you need to run, better put it in a container. Then, you can remove the container and any trace of it is gone. See, for example, https://blog.simos.info/how-to-insta...lxd-container/ Between LXD and firejail, the latter requires you to create the correct configuration (profile). If you make the configuration very restrictive, the process may crash. If you relax the security, it may be too open and miss required restrictions. There are no known vulnerabilities in the default configuration of LXD. If something appears down the line, it will get fixed quickly. |
Hi simosx,
tnx again. Your link makes for interesting reading. There's still much for me to learn.... Quote:
I've been using firejail for quite a while now and have learned the hard way that it's not always working the desired way. I do not want to be misunderstood: for most applications it works just fine. And there are a lot of profiles (https://github.com/netblue30/firejail/tree/master/etc). Yet there's still a vmplayer.profile missing (though there's one for VirtualBox). And I haven't succeeded in creating the correct configuration for it yet. So your point is very valid. Greetings. Rosika |
Hi Rosika,
Let's add a bit more. I hope the discussion remains interesting. Apart from LXD and firejail, there are also the snap packages that are based on Linux security features. I would say that firejail is closer to snap packages than to LXD. With snap packages, an application is described in a configuration file called snapcraft.yaml, and then it is built (from source) into a snap package. Then, you may upload this snap package to the Ubuntu Store so anyone can make use of it. Snap packages are supported in many major distributions. Here is the firejail configuration for darktable, https://github.com/netblue30/firejai...ktable.profile Here is the snapcraft.yaml configuration for darktable, https://github.com/kyrofa/darktable-...snapcraft.yaml My quick viewing shows me that these are almost equivalent. To make a proper comparison, compare to the plugs section in snapcraft.yaml (which interfaces are allowed). To install darktable as a snap, you would run Code:
snap install darktable |
Hi simosx,
Quote:
I've heard about snap. Yet it's not installed by default on my Lubuntu. So it was never really on my mind. The only thing I knew was that it's some kind of package format to be used alongside the normal package management. What I didn't know is that it has security mechanisms implemented. So I'll look into that. Your links are helpful. :study: My interest in container technology stems from the fact that I wanted to get teamviewer running in a sandbox (firejail). Up until now it is the only program I use that doesn't work within firejail. So I'm looking for alternatives to get teamviewer going in a secure environment. And thus containers came to my mind. Greetings. Rosika P.S.: If you are interested why teamviewer doesn't work within firejail: Terminal output: Code:
rosika@rosika-Lenovo-H520e ~> firejail teamviewer Quote:
But no solution so far. |
Hi Rosika,
I also tried to get TeamViewer running in an LXD container according to the guide at https://blog.simos.info/how-to-run-g...buntu-desktop/ It did not work in the beginning but now it almost works. It works locally only and cannot get a connection to the TeamViewer servers. I am mystified as to why it cannot connect to the TeamViewer servers even though the LXD container has Internet connectivity. I assume TeamViewer carries lots of baggage that makes it behave weirdly on non-standard systems. Here is how it works in an LXD container. 1. Set up an LXD container according to https://blog.simos.info/how-to-run-g...buntu-desktop/ 2. Connect to the LXD container with Code:
$ lxc console guiapps 3. Run teamviewer Code:
ubuntu@guiapps:~$ teamviewer I did not try all other network connectivity options (use proxy, etc). On another note, I put online an index of my LXD tutorials, https://discuss.linuxcontainers.org/...-of-simos/1228 |
Hi Simos,
thanks for your reply. I couldn't answer yesterday because the linuxquestions server was down for a while. O.K., I'll do the following: According to your guide for running GUI apps in LXD containers I'm going to try to get teamviewer running. But I doubt that I'll be more successful than you. Because... why should I be? You're the professional here. :hattip: But as I said, I'll give it a try. As soon as I have (or haven't) any results I'll post them here. Thanks also for the index of your LXD tutorials. Very impressive. Greetings. Rosika |
Quote:
I gave it a try again and came up with the following: TeamViewer works in Linux over LXD, as long as you do not use the latest TeamViewer 13. TeamViewer 13 is based on Qt, and is a departure from the older versions that use Wine. Using Qt by itself should not be an issue. I did the easy task and tried out TeamViewer versions 10, 11 and 12. All from https://www.teamviewer.com/en/downlo...ious-versions/ And they just worked. I simply got the TAR files, extracted them and ran TeamViewer. I hope I can figure out why TeamViewer 13 does not work on LXD. edit: here is a guide, https://blog.simos.info/how-to-run-teamviewer-in-lxd/ |
:scratch: At the moment, the only thing I associate with "trusted container" is the question of whether-or-not the container occupant, when it attempts to "become 'root,'" actually does so on the host machine.
And, as far as I'm concerned, no container should ever be so "trusted." A containerized process should live in its own happy, isolated, world, and should be in every way confined to it. If something needs to be done "to" the actual host environment, IMHO it should only be done "in" that environment. |
Hi simosx,
tnx a lot for your reply. Sorry for the belated answer. Alas, I couldn't get teamviewer running. I proceeded as follows: According to your very well-written guide I got my lxd container running. I also named it "guiapps". That went well. Then I installed teamviewer. Yet it was version 13. After reading your latest post I uninstalled it and got the "v11.0.67687" version from https://www.teamviewer.com/en/downlo...ious-versions/. That was o.k. as well. But as with version 13 I get an error message when trying to start it: ubuntu@guiapps:~/alte_version_teamviewer/teamviewer$ ./teamviewer Quote:
Yet I'm not quite sure as to what the "sudo" command does. Am I logged in with sudo? Might that be the cause of the denial? Quote:
Quote:
Greetings. Rosika |
Hi sundialsvcs,
tnx for your comment. Quote:
So the thing is: how could one prevent processes within the container from becoming "root"? If I understand you correctly, containerization wouldn't be the way to go for running untrusted processes in an isolated environment. Greetings. Rosika |
sudo --login --user ubuntu
means: switch to the user ubuntu and simulate a login. So finally, when you attach to the container, it will look like the user ubuntu logged in. The session initially starts as root. |
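Put differently, the full command from the guide does two things in sequence (guiapps is the container name used in this thread):

```shell
# lxc exec attaches to the container as root (uid 0 inside the
# container's namespace); sudo then switches to the unprivileged
# "ubuntu" user and simulates a fresh login shell, with ubuntu's
# environment and home directory:
lxc exec guiapps -- sudo --login --user ubuntu

# Inside that shell, `id` reports the ubuntu user, not root:
#   uid=1000(ubuntu) gid=1000(ubuntu) ...
```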
Hi pan64,
tnx for the explanation. Quote:
But if the user ubuntu is logged in, that means "normal user", right? So I fail to understand why I get Quote:
Greetings. Rosika |
Quote:
Thanks for going through the tutorial! I tried to get TeamViewer to work with LXD several times before (those times were unsuccessful). During the tests, I have come up with the opinion that the source of TeamViewer contains a lot of legacy code that makes it behave weirdly. One of those weird behaviors is exactly the example you are giving. Other weird behaviors were complaining that world-readable files were not readable. As I write in https://blog.simos.info/how-to-run-teamviewer-in-lxd/ there are three common ways to connect to your LXD container,
TeamViewer is so weird that lxc exec is not good enough to run it. You must use lxc console instead. lxc console is a new command in LXD, therefore if you have Ubuntu 16.04 you would get a somewhat older (but fully supported until 2021) version of LXD. There are two ways to upgrade to the latest LXD. One way is to install the snap version of LXD according to the instructions at https://blog.simos.info/how-to-migra...-snap-package/ The other way is to install LXD from the backports repository. To do so, enable the backports repository in Software & Updates (software-properties-gtk). Click to tick the highlighted line that says xenial-backports. https://i.imgur.com/rSrUMFt.png and then run Code:
sudo apt install lxd=2.21-0ubuntu3~17.10.1 lxd-client=2.21-0ubuntu3~17.10.1 |
root is a special case with containers.
Processes which are running in a containerized environment need to have their own personal perspective of what "user-ids" are. Containers handle this by mapping the user-ids that are perceived by the container to the actual user-ids of the host. Obviously, processes from time to time need to switch to a uid=0 context, and to then appear to(!) exercise super-powers within their container. The question is: "does the host see 'root?'" Do the powers of a "super-user" within a container therefore extend to the host? If the container is "non-privileged," as it certainly always should be, a process in a container can believe that it is "root" ... when, to the Linux host, it actually isn't. Linux will dutifully report to the process that it has uid=0. Linux will – up to a point – maintain the illusion of "rootliness." But its actual user-id, as seen by the host, might be 123456. (This arbitrary but non-zero value is unknown to it.) In reality, the process cannot exercise root privileges upon the host. But it is king of its little world. If the container is "privileged," then its user-id really is zero, even on the host. For obvious reasons, I counsel that a container should n-e-v-e-r be privileged. If you need to do something on the host which actually requires root privileges on the host, do it outside of the container. You can see the mapping that the container cannot see, so you can arrange things to look right when viewed from the container's perspective. |
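This mapping can actually be inspected: every process has a /proc/<pid>/uid_map file showing how uids inside its user namespace translate to uids outside. A small sketch that works on any reasonably recent Linux system:

```shell
# Columns: first uid inside the namespace, the uid it maps to
# outside the namespace, and the length of the mapped range.
# On the host this is the identity map (0 maps to 0, with a huge
# range); inside an unprivileged container the second column is
# a high offset such as 100000 -- so uid 0 "root" in the container
# is uid 100000 on the host.
cat /proc/self/uid_map
```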
Quote:
Quote:
|
Hi simosx and sundialsvcs,
thank you so much for the explanation. In the meantime I got teamviewer running within the container, thanks to simos' great tutorial. As far as the user-id of processes is concerned I have another question though: Running "ps -ef" on the host gives me, amongst other information, the following results: Code:
root 2795 1 1 13:07 ? 00:00:23 /usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log Quote:
Or is it just due to the fact that it's a GUI-based application (Mapping the user ID ......)? Greetings. Rosika |
The user-ids 231xxx are most likely those of the container occupants.
Containers use a mapping-table to map the user-ids seen by the container (including uid=0) into the actual ones known to the host. Container occupants do not know what the mapping is. And you'd need to look in the same place to see if any of them are mapped to rosika. Also – Wine is a bit of a special case within a containerized environment because it has to have access to the host's X-server. "Google it" for more details. Yes, because the container processes are, in fact, "Linux processes," the host can kill them. But I suggest that you do it within the appropriate container environment. You don't want to do something that might break the illusion . . . |
Hi sundialsvcs,
tnx a lot. Info: I forgot to mention: "lxc config get guiapps security.privileged" gave me no output whatsoever. Thus I assume everything is alright and I'm running an unprivileged container. ;) Quote:
Yet there's still one thing that puzzles me: when logging into my container by using the command lxc console guiapps I cannot make use of top. I get the normal display of processes but it more or less stops working immediately. Furthermore it says: Quote:
It's a bit of a puzzler given the fact that when running the container with lxc exec guiapps -- sudo --login --user ubuntu top works as it should! With no weird behaviour whatsoever. And it's similar with htop. This one at least seems to work, but when closing with CTRL+c the terminal shows Code:
[?64;1;2;6;9;15;18;21;22c Do you have any idea why (h)top is behaving in such a way and what I could do about it? Greetings. Rosika |
Hi Rosika and sundialsvcs,
The instructions at https://blog.simos.info/how-to-run-g...buntu-desktop/ have a step that does userid mapping for your non-root user from the host to the container. It is the part with title Mapping the user ID of the host to the container (PREREQUISITE). In other words, the processes of the container's non-root user account can be affected by the host's non-root user account. For example, your host's non-root user account can kill a GUI process running in the container. However, as far as I understand, the process that launched in the container does not have access to the host's filesystem. The container is not privileged, but there is a hole to enable running GUI apps with graphics acceleration. The alternative would be to use a separate graphics-accelerated X server like Xephyr, and send the container's output there. I have not tried this. Regarding top and htop, they both work for me on the guiapps container and also a vanilla container. I tried with both gnome-terminal and xterm (of course running these terminal emulators on the host). |
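For reference, the PREREQUISITE step that simosx mentions boils down to a raw.idmap entry, which tells LXD to map one host uid straight through into the container. A sketch (uid 1000 is assumed to be the host user's id; guiapps is the container from this thread):

```shell
# Map host uid/gid 1000 to uid/gid 1000 inside the container, so
# that resources shared with the container (e.g. the X socket for
# GUI apps) carry the expected ownership:
lxc config set guiapps raw.idmap "both 1000 1000"

# The mapping takes effect after a restart:
lxc restart guiapps
```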
Hi Simos,
tnx again. Quote:
And it really seems to do so. :cool: As far as (h)top is concerned, you inspired me to test xterm. I installed it in the guiapps container and upon starting, it opened a new window (xterminal). I think that's due to the fact that we got graphics-accelerated GUI apps running in the container. And now it worked. Top and htop are both running without complaint, even when starting the container with "lxc console guiapps"! No idea why I couldn't get it working any other way. Yet I'm very pleased with that workaround. Thanks a lot for your help. And a big thank you to all the other helpers too. Greetings. Rosika :party: |
Quote:
You also determine – chroot-jail style – what filesystem topology it is able to see. You can also set strict limits on what resources it may consume. When a "container occupant" process is dispatched by the host, the entire set of kernel-settings that implements these various illusions is put in place for that process every time it runs. Other processes running on the same machine might not have these illusions put in front of their eyes. (Or, they may have a different set of illusions.) But, conceptually, "a container" is: "a cleverly-crafted and efficient illusion." The occupants believe that they know what the real world looks like, but they are totally wrong. (And, they don't care. It's good enough for them.) |
Hi sundialsvcs,
tnx for the detailed explanation. For a beginner in containerization not everything is easy to understand. Yet the more I read (especially the good explanations and tutorials provided by you and simosx) the more I get it. It's a really fascinating field. Perhaps I'm a bit paranoid, but I think nowadays it's certainly more important than it used to be to implement as many security mechanisms as possible. ;) And it's not just teamviewer I wanted to get running, but that was my main objective in the beginning. (That's due to the fact that I couldn't get it running within firejail.) But running it in the lxd container is successful. Really fascinating! :cool: So now that everything works fine there's just one thing which puzzles me a bit. When logging in with "lxc console guiapps" I get the following message: Code:
run-parts: /etc/update-motd.d/98-fsck-at-reboot exited with return code 1 Everything is working fine though. So it doesn't seem to have any consequences. Yet I'm curious about it. Do you have any ideas? Greetings. Rosika |
motd is: message of the day (see man motd). You may check what is in that file. The message probably means: when you started the container (which is similar to a boot) fsck was executed, but failed for some reason. You may need to check logs related to it.
|
Hi pan64,
tnx for your help. I found the file "98-fsck-at-reboot". But it just contains the script: Code:
#!/bin/sh Code:
-rw-r--r-- 1 root root 3184 Feb 25 14:26 alternatives.log But thanks anyway. Greetings. Rosika |
so most probably the execution of /usr/lib/update-notifier/update-motd-fsck-at-reboot has failed.
there is a dir named fsck; probably you can find something in it (or in kern.log, syslog). Check the date/time of execution in the logs, that may help to find the relevant lines. |
Hi again,
I really do appreciate your help. Tnx. Yet I'm sorry to say that I couldn't find anything helpful. The fsck directory has just 2 entries in it: Code:
Neither kern.log nor syslog mention anything remotely connected to fsck. I'm a bit at a loss here. But never mind. At least I know the following (thanks to your info): Quote:
On https://superuser.com/questions/8802...-return-code-1 there's a user with that problem as well. But I think that one may have different causes.... Greetings. Rosika |
When you log in into Ubuntu, you get a motd with fresh information relating to your system.
There are scripts that are running in Code:
ubuntu@guiapps:~$ ls -l /etc/update-motd.d/ Code:
98-fsck-at-reboot It does not make sense for a container to deal with fsck issues because these are handled by the host. Therefore, the issue here is that the script should return 0 (no error) and effectively be silent. That is, we have found a bug and need to report it somewhere. Where shall we report it? Quote:
There is a tab there called Bugs where we could report this. Something like "When logging into a container through the console, we get that error". Ideally, we should figure out where in /usr/lib/update-notifier/update-motd-fsck-at-reboot we get the error. But how? We can edit the first line of /usr/lib/update-notifier/update-motd-fsck-at-reboot and change it to Code:
#!/bin/bash -x Let's log in again through the console. Code:
$ lxc console guiapps Code:
$ lxc stop guiapps I have not dug deeper than this. I suppose that it relates to the changes we did to guiapps in order to run GUI apps. guiapps can see the device name but cannot access it, hence the error. I tried with a normal container and I do not get the issue. Therefore, if it is an issue with only guiapps and similar such containers, then I can say that the message can be safely ignored. |
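The `#!/bin/bash -x` trick above generalizes: the -x flag makes the shell print every command to stderr before executing it, which pinpoints the line that produces the non-zero exit code. A self-contained sketch with a made-up script (not the real motd script):

```shell
# Create a tiny script that fails partway through, the way the
# motd script fails inside the container:
cat > /tmp/demo-motd.sh <<'EOF'
#!/bin/bash -x
echo "checking filesystem"
stat /dev/no-such-device
EOF
chmod +x /tmp/demo-motd.sh

# Run it; the trace (on stderr) shows '+ stat /dev/no-such-device'
# right before the error message, and the exit code is non-zero:
/tmp/demo-motd.sh > /tmp/demo-out.txt 2> /tmp/demo-trace.txt
echo "exit code: $?"
```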
Hi simosx,
thank you so much for your very detailed explanation. You put a lot of work into it. Very much appreciated. :) I followed it step by step. Quote:
So to sum up: nothing to be really worried about. Fine! Quote:
Glad this could be sorted out. I probably forgot to mention that I've been testing these lxd containers by running them within my virtual machine (bodhi linux, 32 bit). Before installing them on my production system (Lubuntu, 64 bit) I thought it might be a good idea to get those containers running in my VM. If it's successful there and I have no problems handling them, then they'll probably work to my satisfaction on my host system as well. At the beginning I was considering docker but soon learned that it requires a 64-bit host to run on. Therefore I decided on lxd containers. And now they really work fine. :) So a big thank you to you, Simos, and all the other helpers. I'm so glad about the fantastic help I got. Greetings. Rosika |