Linux - Newbie: This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's this is the place!
I'm currently teaching myself programming, and so have gone from a fairly static list of installed programs to trying out different things, and sometimes my environment becomes quite a mess (as it has currently), cluttered with things I don't use.
On Windows, I would use Altiris to help me avoid this situation, but on Linux I'm uncertain what to do.
I would like an experimental space that I can reset, with great ease, back to a state similar or identical to what I had when I first installed (Fedora 24 XFCE).
I would think the solution might be to create new users and delete them when I'm done experimenting, but two issues I foresee with this are: (a) I might want to keep the particular changes I've made, if they work perfectly, and (b) sometimes something asks for root permission, and I'm not sure everything is always entirely contained within a user's home folder (when installing things). I could be mistaken...
I suppose what I want to do is not dissimilar to a git experimental branch, which I can later merge in or delete.
It would also be fantastic if I could make parts of my setup portable (akin to PortableApps for Windows) so when I find work as a programmer, I can easily bring parts of my setup with me (if they allow that).
I would much appreciate some advice on this topic, please ask me questions too if I haven't been clear enough about my purpose.
Specifically, I want to be able to install something, play around with its configuration, or maybe fail to set it up properly, and then tear it out if it's not working out or if I want to make a second attempt to set it up, having learnt through trial and error how to install it correctly.
Certainly PART of what you want is doable. I am NOT sure you can have a protected and isolated sandbox that also replicates into your host environment on demand. IF you do not mind re-installing and re-configuring projects into your host when it proves out in isolation, there are at least three solutions that come to mind at once:
1. Full virtual machines using qemu or xen. Excellent separation but difficult to port things into the host.
2. OpenVZ containers. This is what I use on CentOS hosts in version 6 and below. Very good isolation and easier migration of code trees between container and host.
3. LXC / Linux Containers. These can hold either a full server environment, as the others do, or just a single process. Not quite as secure, but powerful and compatible with the newest kernels.
I would be comfortable setting up entire production environments in either qemu or OpenVZ, but not in LXC. For something like your purpose, LXC might be just the ticket.
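As a rough sketch of what the LXC route looks like with the classic lxc-* tools (the template and release names here are assumptions; check your distro's LXC documentation):

```shell
# Sketch only: create a disposable Fedora container to experiment in.
sudo lxc-create -n sandbox -t download -- -d fedora -r 24 -a amd64
sudo lxc-start -n sandbox          # boot the container
sudo lxc-attach -n sandbox         # get a root shell inside it
# Experiment, and when you're done, throw the whole thing away:
sudo lxc-stop -n sandbox
sudo lxc-destroy -n sandbox
```

The point is that create/destroy is cheap, so "tearing it out and starting over" costs nothing.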
Another option is to set up a virtual machine using VirtualBox on top of your main operating system. It would entail you having to install the operating system in the virtual machine, but the advantage is that you can make virtual machine snapshots at any time and revert to them if required. What I do, for example, is run a VirtualBox virtual machine containing my operating system (Linux Mint 18 MATE) and use that as a sandbox to play about with new software, configs etc. When I'm happy, I adopt the results into my main host operating system and revert my VM to a pre-playaround state.
Additionally, you could just make backups of your root and home partitions at various points, and revert to these if required. Fine-grain control it isn't, but it does allow you to play about to your heart's content in the knowledge that it is easy to revert to your last known good setup.
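That backup approach can be as simple as one rsync command run as root (the destination path here is a placeholder for an external drive or spare partition):

```shell
# Back up the root filesystem, skipping pseudo-filesystems and mount points.
# /media/backup/rootfs/ is a placeholder for wherever your backup lives.
sudo rsync -aAXv \
    --exclude='/dev/*' --exclude='/proc/*' --exclude='/sys/*' \
    --exclude='/tmp/*' --exclude='/run/*' --exclude='/mnt/*' --exclude='/media/*' \
    / /media/backup/rootfs/
# To revert, boot a live USB and rsync in the other direction.
```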
Thank you both very much. I was aware of virtual machines; among my reasons for not using them, the main one is that my computer isn't powerful enough (it's 8+ years old, but works just fine for me - I don't run much aside from Vim, a shell, Opera and Anki).
Either openVZ or LXC would suit my purposes well.
However, from what I've read, they're operating-system-level containers, and learning about them seems to be more of a SysOps topic.
Docker, on the other hand, seems more of a DevOps topic, and offers application-level containers.
I could be mistaken about this, but I'm going to go the Docker route anyway - it's something I've more often seen in job advertisements as a "nice to have", and it seems like it can solve my problem (I rediscovered it while reading about LXC, which it makes use of, or used to).
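For reference, the throwaway workflow described above looks roughly like this in Docker (the image tag is just an example):

```shell
# Start a disposable Fedora container to experiment in:
docker run -it --name sandbox fedora:24 bash
# ...install things, break things, then either keep the result as an image:
docker commit sandbox my-good-setup
# ...or simply discard the container and start over:
docker rm sandbox
```

`docker commit` is what gives you the "keep the changes if they work perfectly" half of the git-branch analogy.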
If you're up for the challenge of rolling your own solution - possibly as a learning experience - you could set up multiple OS partitions and use rsync as desired to copy from one to another. If you want a _real_ challenge, you can use aufs to dramatically reduce disk space requirements (rather than storing the entire OS, aufs lets you have a root partition that only stores the differences).
The basic steps would be:
1) Create several partitions about the same size. It doesn't need to be exact, because you won't be using true cloning to copy from one to another. (Doing so can cause confusion because a true clone will clone the partition's UUID.)
2) Install baseline OS on one partition.
3) Use something like the following to pseudo-clone the baseline to another partition. This CAN be done from the OS while it's running! Then edit the new etc/fstab so it will use the correct root partition.
Code:
cp -vax /. /media/thewyzewun/sda5/
vi /media/thewyzewun/sda5/etc/fstab
When editing the new fstab file, replace the root partition's UUID. If you're unsure what the UUID is, use the command "blkid" to list UUIDs.
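For instance (the UUID you see will of course be your own):

```shell
# Find the new partition's UUID:
blkid /dev/sda5
# Then make the root line in /media/thewyzewun/sda5/etc/fstab use that UUID:
# UUID=<uuid-from-blkid>   /   ext4   defaults   1   1
```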
4) run "update-grub" to update /boot/grub/grub.cfg. This should automatically detect the new copy of Linux and create a boot menu entry for it.
5) Later on, you can pseudo-clone onto a partition that already holds a copy by using rsync with the same options.
rsync is like a copy which skips over files with identical timestamps. If you also have it skip etc/fstab, you won't have to edit the UUID again.
This method is, of course, a lower-level solution without any bells or whistles. But it is VERY efficient - good for pathetic old hardware. And you have full control over what's going on. And like I said, it can be a good learning experience. You can investigate the meaning of the options to gain some insight into what is necessary to do a proper Linux OS clone (-v -a -x -A -X --delete).
Oh - one more thought. In addition to changing etc/fstab, you could also modify etc/hostname and the desktop background image. That way, the login prompt and background image will remind you which partition you're booted into at any time.
For example, you could name etc/hostname on the different partitions something like:
devobox1
devobox5
devobox6
devobox7
And you could set the background image to /home/thewyzewun/Pictures/background.jpg. Copy different image files to background.jpg on the different partitions. To keep these files from being overwritten, you'd add --exclude options for them to the rsync command.
For things like this taken to an EXTREME, there is a bit of scientific software that pushes the idea about as far as it goes: the "NeoGeography Toolkit" https://github.com/NeoGeographyToolkit
It sounds like what you are looking for, but you may actually prefer to use a VM, as that allows you to do oddball things to the kernel - and if they fail and the system doesn't run, it is easier to roll back to a snapshot.
Let me clear up some misunderstanding here. LXC is Linux Containers. A container can hold a single process, or an entire server image, but it is not true virtualization. Think of it as that little chroot kid, all grown up.
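For comparison, the bare chroot being alluded to looks roughly like this on Fedora (a minimal sketch only; a usable jail also needs /dev, /proc and friends bind-mounted inside it):

```shell
# Minimal chroot sketch - dnf can install a base system into an alternate root.
mkdir -p /srv/jail
sudo dnf --installroot=/srv/jail --releasever=24 install -y bash coreutils
sudo chroot /srv/jail /bin/bash
```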
OpenVZ is like full-on, Olympic-level LXC: containers with better isolation and more (and different) managed parameters - still containers, but much closer to virtualization.
The advantage of containers is 3% or less total overhead (so near native-iron performance), and you can directly copy or install code trees from the container onto the host if you wish.
Qemu and VirtualBox provide real virtualization with even better separation, but it is difficult to migrate anything from the guest to the host; each guest acts a bit more like a separate machine. Real virtualization also has much higher overhead, generally in the range of 10% to 30% depending upon what you are doing.
I am thinking that you should not need true virtualization for your development purposes; a chroot-style container might serve. LXC has gotten good enough that it is almost easier and faster to create an LXC container than to properly and completely configure a chroot jail with the properties you require. After giving it some thought, I would rather develop in OpenVZ containers, but only because I have used OpenVZ a lot and could do that easily and quickly. The better and more modern answer would be LXC, and that is what I recommend.
Docker once used LXC containers and could be configured for OpenVZ containers, but that was some time ago and I have no idea where that has gone. I would think that using Docker for this would work, but it might be as much overkill (in a way) as full-on virtual machines. It could be made to work, but it is not really the right tool for the job.
I would go simple and not try to fully master the separation tools you choose up front. Make sure they do the job, and use them to accomplish your goal: isolation of your development environments. If they work and you enjoy using them, you will come to master them quite naturally and painlessly over time. Your primary focus should be on your development goals and on mastering that creative process and its tools.