LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - Newbie (https://www.linuxquestions.org/questions/linux-newbie-8/)
-   -   Home Server Setup recommendations (https://www.linuxquestions.org/questions/linux-newbie-8/home-server-setup-recommendations-4175539199/)

virtuozzi 04-09-2015 06:06 AM

Home Server Setup recommendations
 
Hi Guys & Girls

I'm planning to reuse my old hardware for a home server based on Debian.
I want to provide my LAN with 4 to 5 services like router/dhcp, tftp, pxe/fw, nfs-server, media-server, clonezilla-server, squid-dansguardian and eventually later on a small MTA/webmail (Horde) solution.

How would you set this up?

An all-in-one machine running natively on the hardware, or separate VMs on KVM?
I would also like to have a stable and easily manageable backup architecture, like using a central Clonezilla server instance to back up everything from the other services.

Greetings and thanks a lot

vi

wpeckham 04-09-2015 06:30 AM

I have done this.
 
You could go very fancy with this, but I would not.
Unless you want to have to mess with this server a LOT, go simple.
Make a server with each of the services you need, but run each (where possible) in an LXC container to isolate them.
This gives you a simple, manageable server with secure service isolation, but without the performance and complexity issues of full virtualization.

[EDIT] complete server isolation using LXC is possible, but overkill for this application.[/EDIT]
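
To give you an idea, creating one such container and dropping a service into it goes roughly like this (container name and release are just examples, using the LXC 1.x download template):

Code:

# Create a minimal Debian container for a single service
sudo lxc-create -t download -n nfs01 -- -d debian -r jessie -a amd64

# Start it in the background and install the service inside it
sudo lxc-start -n nfs01 -d
sudo lxc-attach -n nfs01 -- apt-get update
sudo lxc-attach -n nfs01 -- apt-get install -y nfs-kernel-server

# Overview of containers, their state and IPs
sudo lxc-ls --fancy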

T3RM1NVT0R 04-09-2015 06:38 AM

Quote:

I want to provide my LAN with 4 to 5 services like router/dhcp, tftp, pxe/fw, nfs-server, media-server, clonezilla-server, squid-dansguardian and eventually later on a small MTA/webmail (Horde) solution.

How would you set this up?

I would definitely not put all of them on one server. I would go with VMs, but I would group the services. Running too many VMs will obviously affect performance, and you don't even know whether all of them will be used to the fullest. So if I had to do such a setup, I would go with the following (sketch below the list):

1. router/dhcp, tftp, pxe/fw, squid-dansguardian on 1 VM.
2. nfs-server, media-server on 1 VM.
3. clonezilla-server on a separate VM, as this will be used for backup.
4. Leave enough room for 1 more VM to be set up later as the MTA/webmail (Horde) solution.
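
For #1, for instance, the VM could be created along these lines under KVM/libvirt (all names, sizes and paths are illustrative):

Code:

# Combined router/dhcp/tftp/pxe/squid guest
sudo virt-install \
  --name infra01 \
  --ram 1024 --vcpus 1 \
  --disk path=/var/lib/libvirt/images/infra01.qcow2,size=10 \
  --cdrom /var/lib/libvirt/images/debian-netinst.iso \
  --network bridge=br0 \
  --os-variant debianwheezy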

virtuozzi 04-09-2015 07:25 AM

Yes indeed, I also thought about virtualization.
The input about containers is also great; I should take a look at it.

Thanks a lot for confirming these ideas and for the quick responses!

rtmistler 04-09-2015 07:27 AM

Quote:

Originally Posted by T3RM1NVT0R (Post 5344819)
I would definitely not put all of them on one server. I would go with VMs, but I would group the services. Running too many VMs will obviously affect performance, and you don't even know whether all of them will be used to the fullest. So if I had to do such a setup, I would go with the following (sketch below the list):

1. router/dhcp, tftp, pxe/fw, squid-dansguardian on 1 VM.
2. nfs-server, media-server on 1 VM.
3. clonezilla-server on a separate VM, as this will be used for backup.
4. Leave enough room for 1 more VM to be set up later as the MTA/webmail (Horde) solution.

Not anywhere near a sysadmin type myself, but can you explain the reasoning why VMs would be the recommended way to go here? I mean, classically someone would load a bunch of services on the same instance of the machine. OK, so now you're using that same machine to make virtual instances, but it's all still running on the same machine. The difference is in the scheduling. How is one machine instance any worse than that same machine running 3 or more virtual machines, each of those running one or more services under them? Seems to me that it is an attempt at making the load virtual, in that, say, DHCP and TFTP are sparsely used, so that VM doesn't get run much, and the media server might need to run a lot, so that VM might get run a lot.

I'm just wondering how the use of VMs is any more beneficial than just using the original machine and having a bunch of services available on it.

T3RM1NVT0R 04-09-2015 07:35 AM

Quote:

Originally Posted by rtmistler (Post 5344847)
Not anywhere near a sysadmin type myself, but can you explain the reasoning why VMs would be the recommended way to go here? I mean, classically someone would load a bunch of services on the same instance of the machine. OK, so now you're using that same machine to make virtual instances, but it's all still running on the same machine. The difference is in the scheduling. How is one machine instance any worse than that same machine running 3 or more virtual machines, each of those running one or more services under them? Seems to me that it is an attempt at making the load virtual, in that, say, DHCP and TFTP are sparsely used, so that VM doesn't get run much, and the media server might need to run a lot, so that VM might get run a lot.

I'm just wondering how the use of VMs is any more beneficial than just using the original machine and having a bunch of services available on it.

The reason I prefer to put them under VMs is that if I make any configuration changes, they won't affect the whole machine, just the VM. Basically it gives me more control when it comes to management. Running too many things on the same machine becomes difficult to manage. For example: if I have an issue with NFS that requires a reboot and NFS is on the base machine, I am affecting other services as well. If I am running NFS under a VM, I can simply reboot that VM, leaving the other services unaffected.
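
With KVM/libvirt, for example, that reboot is a one-liner (the VM name here is just an example):

Code:

# Reboot only the NFS guest; every other VM on the host keeps running
sudo virsh reboot nfs-vm

# Confirm the rest are still up
sudo virsh list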

virtuozzi 04-09-2015 02:29 PM

Quote:

Originally Posted by T3RM1NVT0R (Post 5344851)
The reason I prefer to put them under VMs is that if I make any configuration changes, they won't affect the whole machine, just the VM. Basically it gives me more control when it comes to management. Running too many things on the same machine becomes difficult to manage. For example: if I have an issue with NFS that requires a reboot and NFS is on the base machine, I am affecting other services as well. If I am running NFS under a VM, I can simply reboot that VM, leaving the other services unaffected.

Yep, the abstraction makes you less vulnerable to misconfiguration, patches, upgrades, etc.
Mess up one VM, but not the whole system!

But honestly, I checked out LXC this afternoon, and I would say it is even better and more efficient than classic virtualization.

T3RM1NVT0R 04-09-2015 02:41 PM

I just had a cursory look at LXC and indeed it looks great. I haven't tried it though, and that is the reason I suggested VMs. Now that I know something about LXC, I will definitely give it a try.

+1 to wpeckham for bringing up this topic!!!

suicidaleggroll 04-09-2015 03:01 PM

Another benefit of VMs is that they can run anywhere. The OP said this was on some old hardware; what happens when that hardware fails? If it's running everything, then all of those services fail with it. He needs to set up the same services from scratch (maybe re-using a config file here or there, if he can, depending on OS versions) on other machines in the household to keep everything from coming to a screeching halt, and when he sets up a new server he has to set up all of the services again from scratch (maybe re-using a config file here or there, if he can, depending on OS versions).

VMs can be shuffled around anywhere. If the main server fails, just grab the latest VM snapshot from your backups and boot it up on another machine to take over in the interim. When a new server is built, move the VMs back onto it. Downtime is negligible, provided you have other machines the VMs could be loaded onto in an emergency (laptops, HTPC, etc.).

For example, I have my DNS server in its own VM. When the server hosting it needs some maintenance, I just shut down the DNS VM, boot up the latest copy on practically any other machine on the network, and then do my maintenance on the server. When I'm done, I shut down the DNS VM on the interim host, boot it back up on the server, and nobody even notices. You can do the same thing with any other service - NFS, Samba, DHCP, FTP, etc.
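
In libvirt terms, that shuffle looks roughly like this (host and VM names are made up):

Code:

# On the server: shut the DNS guest down cleanly
sudo virsh shutdown dns-vm

# Copy its disk image and domain definition over to the interim host
sudo virsh dumpxml dns-vm > dns-vm.xml
scp /var/lib/libvirt/images/dns-vm.qcow2 interim-host:/var/lib/libvirt/images/
scp dns-vm.xml interim-host:/tmp/

# On the interim host: register and boot it
sudo virsh define /tmp/dns-vm.xml
sudo virsh start dns-vm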

I'm not sure how much of that applies to LXC containers though.

T3RM1NVT0R 04-09-2015 03:06 PM

Quote:

Originally Posted by suicidaleggroll (Post 5345084)
Another benefit of VMs is that they can run anywhere. The OP said this was on some old hardware; what happens when that hardware fails? If it's running everything, then all of those services fail with it. He needs to set up the same services from scratch (maybe re-using a config file here or there, if he can, depending on OS versions) on other machines in the household to keep everything from coming to a screeching halt, and when he sets up a new server he has to set up all of the services again from scratch (maybe re-using a config file here or there, if he can, depending on OS versions).

VMs can be shuffled around anywhere. If the main server fails, just grab the latest VM snapshot from your backups and boot it up on another machine to take over in the interim. When a new server is built, move the VMs back onto it. Downtime is negligible, provided you have other machines the VMs could be loaded onto in an emergency (laptops, HTPC, etc.).

For example, I have my DNS server in its own VM. When the server hosting it needs some maintenance, I just shut down the DNS VM, boot up the latest copy on practically any other machine on the network, and then do my maintenance on the server. When I'm done, I shut down the DNS VM on the interim host, boot it back up on the server, and nobody even notices. You can do the same thing with any other service - NFS, Samba, DHCP, FTP, etc.

I'm not sure how much of that applies to LXC containers though.

Good catch, regarding the old hardware.

@ virtuozzi: As you said it is old hardware, I'm not sure if you will be running the latest OS which supports LXC. If it does, great; otherwise you can always switch to the old-school approach.

TobiSGD 04-09-2015 05:19 PM

I see no reason why LXC shouldn't run on older hardware (all you need is a somewhat recent kernel, IIRC). In fact, I would always prefer containers over VMs on older hardware, especially because of the lower resource usage and the simple fact that older hardware may lack support for hardware virtualization.
By the way, when looking at container technology you may also want to evaluate Docker or, if your host OS happens to use systemd, systemd-nspawn.
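
Two quick checks, by the way (lxc-checkconfig ships with the LXC userland tools):

Code:

# Does the CPU support hardware virtualization? (0 means no VT-x/AMD-V)
egrep -c '(vmx|svm)' /proc/cpuinfo

# Does the running kernel provide the namespaces/cgroups that LXC needs?
lxc-checkconfig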

wpeckham 04-10-2015 06:22 AM

Reprise.
 
Containers are light and impact performance far less than virtualization. That was my first thought: that the old hardware would support this better than full virtualization.

The second thing was that on old hardware backups are going to be critical, and it really will not matter whether you use virtualization, native services, or containers. When the hardware fails, you will have to have a plan and the resources to move forward anyway. I should have stated as much.

I have used all of the discussed solutions daily (at work) and found them all useful and reliable. Right now I am not using containers at work, but they continue to help secure my home server. Using containers, I get more services running on the same hardware.

Note: I would not run ALL services in containers, unless all services were exposed to possible security threat vectors. I would run the well-secured services natively, and containerize only those that might present a vulnerability.

The key is simple configurations that are easy to back up and replicate, difficult to get wrong or break.

If I had to go with tighter containers, I would go OpenVZ. It is more mature than LXC, but requires a special (patched) kernel. It has a very light footprint like LXC (both run between 1% and 3% overhead, compared to 10% to 30% for VMware), but greater isolation, much like full-V. The features (container backup, clone, restore, migration without downtime between OpenVZ servers, etc.) make it a serious tool for consideration.
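
For a taste, the OpenVZ workflow looks roughly like this (CTID, hostname and template name are illustrative):

Code:

# Create and start a container from a Debian template
vzctl create 101 --ostemplate debian-7.0-x86_64 --hostname nfs01
vzctl start 101

# Get a shell inside it
vzctl enter 101

# Live-migrate it to another OpenVZ host, without downtime
vzmigrate --online other-host 101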

Note: both LXC and OpenVZ are free. The best VMware is not; the free parts have more overhead and less power, and are overkill for old hardware anyway. There are other nice options, but none as close to native performance as the kernel-based options LXC and OpenVZ. I love VirtualBox, but it runs with MUCH higher overhead. My advice was based on keeping it simple and fast for old hardware.

PS. I like that idea of clustering services. It is not as applicable to LXC as it is to OpenVZ or a full-V solution, but it provides the isolation with less disk overhead. Good thinking!

virtuozzi 04-10-2015 04:54 PM

The hardware I'm gonna use for this home server is actually not that old (i5-3570K, 16 GB RAM, 256 GB SSD).
Should be suitable for both containers and VMs.

OpenVZ is also very interesting; I wanted to give it a try as well.

Am I getting it right about Docker, that its use case is more single-application containers, whereas LXC emulates the whole OS with "support" for multiple services/apps?

T3RM1NVT0R 04-10-2015 05:33 PM

Quote:

The hardware I'm gonna use for this home server is actually not that old (i5-3570K, 16 GB RAM, 256 GB SSD).
Should be suitable for both containers and VMs.

To be honest, I wouldn't call that old at all. ;) And as far as I am aware you can upgrade the i5 to an i7 on the same motherboard. I have an i5 system and the motherboard in use is compatible with an i7 upgrade.

For the Docker and container part I will leave it to other members to comment on, as I have not used them yet.

TobiSGD 04-10-2015 09:01 PM

Quote:

Originally Posted by virtuozzi (Post 5345631)
Am I getting it right about Docker, that its use case is more single-application containers, whereas LXC emulates the whole OS with "support" for multiple services/apps?

You can use both for both purposes. Docker is mostly used in a way where you have a single base image for the OS used in your containers, and your applications run in an overlaying filesystem on top of that (using mechanisms like the btrfs filesystem, AUFS, or overlayfs). This is quite easy to maintain, but you can do the same with LXC or systemd-nspawn. AFAIK, you are right that LXC seems to be used more for separate containers that each contain a separate complete OS (as complete as needed for the application), but it is not limited to that, and Docker and systemd-nspawn can be used in that way also. Can't say anything about OpenVZ, never tried that.
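
To illustrate the two styles (the image name and paths are just examples, not recommendations):

Code:

# Docker style: one application per container, layered on a shared base image
docker run -d --name proxy -p 3128:3128 example/squid-image

# systemd-nspawn style: boot a complete Debian tree as a container
sudo debootstrap jessie /var/lib/machines/debian
sudo systemd-nspawn -D /var/lib/machines/debian -b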

