Old 04-09-2015, 06:06 AM   #1
virtuozzi
LQ Newbie
 
Registered: Apr 2015
Location: Switzerland
Distribution: Debian, Ubuntu, CentOS
Posts: 10

Rep: Reputation: Disabled
Home Server Setup recommendations


Hi Guys & Girls

I'm planning to reuse my old hardware for a home server based on Debian.
I want to provide my LAN with a handful of services: router/DHCP, TFTP, PXE/firewall, NFS server, media server, Clonezilla server, Squid/DansGuardian, and eventually later on a small MTA/webmail (Horde) solution.

How would you set this up?

An all-in-one machine running everything natively on the hardware, or separate VMs under KVM?
I would also like a stable and easily manageable backup architecture, for example a central Clonezilla server instance that picks up everything from the other services.

Greetings and thanks a lot

vi
 
Old 04-09-2015, 06:30 AM   #2
wpeckham
LQ Guru
 
Registered: Apr 2010
Location: Continental USA
Distribution: Debian, Ubuntu, RedHat, DSL, Puppy, CentOS, Knoppix, Mint-DE, Sparky, VSIDO, tinycore, Q4OS,Manjaro
Posts: 5,627

Rep: Reputation: 2695
I have done this.

You could go very fancy with this, but I would not.
Unless you want to have to mess with this server a LOT, go simple.
Make one server with each of the services you need, but run each service (where possible) in its own LXC container to isolate it.
This gives you a simple, manageable server with secure service isolation, but without the performance and complexity overhead of full virtualization.

[EDIT] complete server isolation using LXC is possible, but overkill for this application.[/EDIT]
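If you want to experiment with that approach, here is a minimal sketch using the standard LXC userspace tools on a Debian host; the container name and the DHCP package are only examples, and template names differ between LXC versions.

Code:
# install the LXC userspace tools (run as root)
apt-get install lxc

# create a Debian container named "net" from the stock debian template
lxc-create -n net -t debian

# start it in the background and open a shell inside it
lxc-start -n net -d
lxc-attach -n net

# inside the container, install only the service it should isolate, e.g.
# apt-get install isc-dhcp-server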

Last edited by wpeckham; 04-09-2015 at 06:32 AM. Reason: Thinking too slow before morning coffee
 
1 member found this post helpful.
Old 04-09-2015, 06:38 AM   #3
T3RM1NVT0R
Senior Member
 
Registered: Dec 2010
Location: Internet
Distribution: Linux Mint, SLES, CentOS, Red Hat
Posts: 2,385

Rep: Reputation: 477
Quote:
I want to provide my LAN with a handful of services: router/DHCP, TFTP, PXE/firewall, NFS server, media server, Clonezilla server, Squid/DansGuardian, and eventually later on a small MTA/webmail (Horde) solution.

How would you set this up?
I would definitely not go with putting all of them directly on one server. I would go with VMs, but I would group the services. Running too many VMs will obviously affect performance, and you don't even know yet whether all of them will be used to the fullest. So if I had to do such a setup, I would go with the following:

1. router/DHCP, TFTP, PXE/firewall and Squid/DansGuardian on one VM.
2. NFS server and media server on a second VM.
3. Clonezilla server on a separate VM, as this one will be used for backup.
4. Leave enough room for one more VM to be set up later as the MTA/webmail (Horde) solution.
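Purely as an illustration of how one of those grouped VMs could be created on a KVM/libvirt host (the name, sizes, ISO path and bridge below are placeholders, not a recommendation):

Code:
# create a small Debian VM for the NFS/media group (run as root on the host)
virt-install \
  --name storage-vm \
  --ram 2048 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/storage-vm.img,size=40 \
  --cdrom /srv/iso/debian-netinst.iso \
  --network bridge=br0 \
  --graphics vnc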
 
Old 04-09-2015, 07:25 AM   #4
virtuozzi
LQ Newbie
 
Registered: Apr 2015
Location: Switzerland
Distribution: Debian, Ubuntu, CentOS
Posts: 10

Original Poster
Rep: Reputation: Disabled
Yes indeed, I also thought about virtualization.
The input about containers is also great; I should take a look at that.

Thanks a lot for confirming these ideas and for the quick responses!
 
Old 04-09-2015, 07:27 AM   #5
rtmistler
Moderator
 
Registered: Mar 2011
Location: USA
Distribution: MINT Debian, Angstrom, SUSE, Ubuntu, Debian
Posts: 9,882
Blog Entries: 13

Rep: Reputation: 4930
Quote:
Originally Posted by T3RM1NVT0R View Post
I would definitely not go with putting all of them directly on one server. I would go with VMs, but I would group the services. Running too many VMs will obviously affect performance, and you don't even know yet whether all of them will be used to the fullest. So if I had to do such a setup, I would go with the following:

1. router/DHCP, TFTP, PXE/firewall and Squid/DansGuardian on one VM.
2. NFS server and media server on a second VM.
3. Clonezilla server on a separate VM, as this one will be used for backup.
4. Leave enough room for one more VM to be set up later as the MTA/webmail (Horde) solution.
I'm not anywhere near a sysadmin type myself, but can you explain the reasoning for why VMs would be the recommended way to go here? Classically someone would load a bunch of services onto the same instance of the machine. OK, so now you're using that same machine to create virtual instances, but it's all still running on the same hardware; the difference is in the scheduling. How is one machine instance running all the services any worse than that same machine running three or more virtual machines, each of them running one or more services? It seems to me it's an attempt at spreading the load virtually, in that, say, DHCP and TFTP are sparsely used, so that VM doesn't get scheduled much, while the media server might need to run a lot, so that VM gets scheduled a lot.

I'm just wondering how the use of VMs is any more beneficial than just using the original machine and having a bunch of services available on it.
 
1 member found this post helpful.
Old 04-09-2015, 07:35 AM   #6
T3RM1NVT0R
Senior Member
 
Registered: Dec 2010
Location: Internet
Distribution: Linux Mint, SLES, CentOS, Red Hat
Posts: 2,385

Rep: Reputation: 477
Quote:
Originally Posted by rtmistler View Post
I'm not anywhere near a sysadmin type myself, but can you explain the reasoning for why VMs would be the recommended way to go here? Classically someone would load a bunch of services onto the same instance of the machine. OK, so now you're using that same machine to create virtual instances, but it's all still running on the same hardware; the difference is in the scheduling. How is one machine instance running all the services any worse than that same machine running three or more virtual machines, each of them running one or more services? It seems to me it's an attempt at spreading the load virtually, in that, say, DHCP and TFTP are sparsely used, so that VM doesn't get scheduled much, while the media server might need to run a lot, so that VM gets scheduled a lot.

I'm just wondering how the use of VMs is any more beneficial than just using the original machine and having a bunch of services available on it.
The reason I prefer to put them in VMs is that if I make configuration changes they won't affect the whole machine, just that VM. Basically it gives me more control when it comes to management; running too many things on the same machine becomes difficult to manage. For example, if I hit an issue with NFS that requires a reboot and NFS sits on the base machine, I am affecting all the other services as well. If NFS runs in a VM I can simply reboot that VM and leave the other services unaffected.
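To make that concrete, on a libvirt/KVM host such a reboot is just an operation on the one guest; the domain name nfs-vm below is only an example:

Code:
# reboot only the NFS guest; the host and the other VMs keep running
virsh reboot nfs-vm

# or stop it cleanly for maintenance and start it again afterwards
virsh shutdown nfs-vm
virsh start nfs-vm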
 
Old 04-09-2015, 02:29 PM   #7
virtuozzi
LQ Newbie
 
Registered: Apr 2015
Location: Switzerland
Distribution: Debian, Ubuntu, CentOS
Posts: 10

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by T3RM1NVT0R View Post
The reason I prefer to put them in VMs is that if I make configuration changes they won't affect the whole machine, just that VM. Basically it gives me more control when it comes to management; running too many things on the same machine becomes difficult to manage. For example, if I hit an issue with NFS that requires a reboot and NFS sits on the base machine, I am affecting all the other services as well. If NFS runs in a VM I can simply reboot that VM and leave the other services unaffected.
Yep, the abstraction makes you less vulnerable to misconfiguration, patches, upgrades etc.
Mess up one VM, but not the whole system!

But honestly, I checked out LXC this afternoon and I would say it is even better and more efficient than classic virtualization.
 
Old 04-09-2015, 02:41 PM   #8
T3RM1NVT0R
Senior Member
 
Registered: Dec 2010
Location: Internet
Distribution: Linux Mint, SLES, CentOS, Red Hat
Posts: 2,385

Rep: Reputation: 477
I just had a cursory look at LXC and it does indeed look great. I haven't tried it yet, though, which is the reason I suggested VMs. Now that I know something about LXC I will definitely give it a try.

+1 to wpeckham for bringing up this topic!!!
 
Old 04-09-2015, 03:01 PM   #9
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,573

Rep: Reputation: 2142
Another benefit of VMs is that they can run anywhere. The OP said this was on some old hardware; what happens when that hardware fails? If it's running everything natively, all of those services fail with it. He has to set the same services up from scratch on other machines in the household to keep everything from coming to a screeching halt (maybe re-using a config file here or there, if he can, depending on OS versions), and when he builds a new server he has to set all of the services up from scratch again on it.

VMs can be shuffled around anywhere. If the main server fails, just grab the latest VM snapshot from your backups and boot it up on another machine to take over in the interim. When a new server is built, move the VMs back onto it. Downtime is negligible, provided you have other machines the VMs could be loaded onto in an emergency (laptops, HTPC, etc.).

For example, I have my DNS server in its own VM. When the server hosting it needs some maintenance, I just shut down the DNS VM, boot up the latest copy on practically any other machine on the network, and then do my maintenance on the server. When I'm done, I shut down the DNS VM on the interim host, boot it back up on the server, and nobody even notices. You can do the same thing with any other service - NFS, Samba, DHCP, FTP, etc.
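As a rough illustration of that workflow with libvirt (the domain name dns, the hostnames and the disk path are made up, and the disk image either has to be copied along or live on shared storage):

Code:
# on the server that needs maintenance: stop the guest, export its definition
virsh shutdown dns
virsh dumpxml dns > dns.xml

# copy the definition and disk image to the interim host (same paths assumed)
scp dns.xml interim-host:/tmp/
scp /var/lib/libvirt/images/dns.qcow2 interim-host:/var/lib/libvirt/images/

# on the interim host: register and start the same guest
virsh define /tmp/dns.xml
virsh start dns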

I'm not sure how much of that applies to LXC containers though.
 
1 member found this post helpful.
Old 04-09-2015, 03:06 PM   #10
T3RM1NVT0R
Senior Member
 
Registered: Dec 2010
Location: Internet
Distribution: Linux Mint, SLES, CentOS, Red Hat
Posts: 2,385

Rep: Reputation: 477
Quote:
Originally Posted by suicidaleggroll View Post
Another benefit of VMs is that they can run anywhere. The OP said this was on some old hardware; what happens when that hardware fails? If it's running everything natively, all of those services fail with it. He has to set the same services up from scratch on other machines in the household to keep everything from coming to a screeching halt (maybe re-using a config file here or there, if he can, depending on OS versions), and when he builds a new server he has to set all of the services up from scratch again on it.

VMs can be shuffled around anywhere. If the main server fails, just grab the latest VM snapshot from your backups and boot it up on another machine to take over in the interim. When a new server is built, move the VMs back onto it. Downtime is negligible, provided you have other machines the VMs could be loaded onto in an emergency (laptops, HTPC, etc.).

For example, I have my DNS server in its own VM. When the server hosting it needs some maintenance, I just shut down the DNS VM, boot up the latest copy on practically any other machine on the network, and then do my maintenance on the server. When I'm done, I shut down the DNS VM on the interim host, boot it back up on the server, and nobody even notices. You can do the same thing with any other service - NFS, Samba, DHCP, FTP, etc.

I'm not sure how much of that applies to LXC containers though.
Good catch regarding the old hardware.

@virtuozzi: As you said it is old hardware, I'm not sure you will be running an OS recent enough to support LXC. If it does, good; otherwise you can always go old school.
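If in doubt, the LXC tools ship a small script that reports whether the running kernel has the namespace and cgroup features LXC needs (assuming the lxc package is installed):

Code:
# prints each required kernel feature and whether it is enabled
lxc-checkconfig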
 
Old 04-09-2015, 05:19 PM   #11
TobiSGD
Moderator
 
Registered: Dec 2009
Location: Germany
Distribution: Whatever fits the task best
Posts: 17,148
Blog Entries: 2

Rep: Reputation: 4886
I see no reason why LXC shouldn't run on older hardware (all you need is a somewhat recent kernel, IIRC). In fact, I would always prefer containers over VMs on older hardware, especially because of the lower resource usage and the simple fact that older hardware may lack support for hardware virtualization.
By the way, when looking at container technology you may also want to evaluate Docker or, if your host OS happens to use systemd, systemd-nspawn.
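For the curious, a minimal systemd-nspawn experiment on a systemd-based host looks roughly like this; the target directory and mirror are just examples, and you may need to set a root password inside the tree before you can log in:

Code:
# build a minimal Debian tree to use as the container root (run as root)
apt-get install debootstrap
debootstrap stable /var/lib/machines/test http://ftp.debian.org/debian

# boot that tree as a lightweight container (systemd-nspawn ships with systemd)
systemd-nspawn -D /var/lib/machines/test -b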
 
Old 04-10-2015, 06:22 AM   #12
wpeckham
LQ Guru
 
Registered: Apr 2010
Location: Continental USA
Distribution: Debian, Ubuntu, RedHat, DSL, Puppy, CentOS, Knoppix, Mint-DE, Sparky, VSIDO, tinycore, Q4OS,Manjaro
Posts: 5,627

Rep: Reputation: 2695
reprise.

Containers are light and impact performance far less than full virtualization. That was my first thought: the old hardware would support containers better than full virtualization.

The second thing was that on old hardware backups are going to be critical, and it really will not matter whether you use virtualization, native services, or containers: when the hardware fails, you will need a plan and the resources to move forward anyway. I should have stated as much.

I have used all of the solutions discussed here daily (at work) and found them all useful and reliable. Right now I am not using containers at work, but they continue to help secure my home server; using containers I get more services running on the same hardware.

Note: I would not run ALL services in containers unless all of them were exposed to possible security threat vectors. I would run the genuinely secure services natively, and put only those that might present a vulnerability in a container.

The key is simple configurations that are easy to back up and replicate, and difficult to get wrong or break.

If I had to go with tighter containers, I would go with OpenVZ. It is more mature than LXC, but requires a special (patched) kernel. It has a very light footprint like LXC (both run at between 1% and 3% overhead, compared to 10% to 30% for VMware), but greater isolation, much like full virtualization. The features (container backup, clone, restore, migration between OpenVZ servers without downtime, etc.) make it a serious tool for consideration.
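A hedged sketch of that OpenVZ workflow, assuming a host already booted into the OpenVZ kernel and a downloaded Debian template (the container ID, template name, IP and hostnames are placeholders):

Code:
# create container 101 from a Debian template and give it an address
vzctl create 101 --ostemplate debian-7.0-x86_64-minimal
vzctl set 101 --ipadd 192.168.1.101 --hostname nfs01 --save
vzctl start 101
vzctl enter 101          # then install the service inside

# later, move it to another OpenVZ host with (almost) no downtime
vzmigrate --online other-host 101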

Note: both LXC and OpenVZ are free. The best of VMware is not; the free parts have more overhead and less power, and are overkill for old hardware anyway. There are other nice options, but none as close to native performance as the kernel-based options LXC and OpenVZ. I love VirtualBox, but it runs with MUCH higher overhead. My advice was based on keeping things simple and fast for old hardware.

PS. I like the idea of grouping services. It is not as appropriate for LXC as for OpenVZ or a full-virtualization solution, but it provides the isolation with less disk overhead. Good thinking!

Last edited by wpeckham; 04-10-2015 at 06:44 AM. Reason: before morning coffee....
 
Old 04-10-2015, 04:54 PM   #13
virtuozzi
LQ Newbie
 
Registered: Apr 2015
Location: Switzerland
Distribution: Debian, Ubuntu, CentOS
Posts: 10

Original Poster
Rep: Reputation: Disabled
The hardware I'm going to use for this home server is actually not that old (i5-3570K, 16 GB RAM, 256 GB SSD).
It should be suitable for both containers and VMs.

OpenVZ is also very interesting; I wanted to give that a try as well.

Am I getting it right about Docker, that its use case is more single-application containers, whereas LXC emulates the whole OS with "support" for multiple services/apps per container?
 
Old 04-10-2015, 05:33 PM   #14
T3RM1NVT0R
Senior Member
 
Registered: Dec 2010
Location: Internet
Distribution: Linux Mint, SLES, CentOS, Red Hat
Posts: 2,385

Rep: Reputation: 477
Quote:
The hardware I'm going to use for this home server is actually not that old (i5-3570K, 16 GB RAM, 256 GB SSD).
It should be suitable for both containers and VMs.
To be honest, I wouldn't call that old at all. And as far as I am aware you can upgrade from an i5 to an i7 on the same motherboard; I have an i5 system and the motherboard in use supports an upgrade to an i7.

For the Docker and container part I will leave it to other members to comment, as I have not used them yet.
 
Old 04-10-2015, 09:01 PM   #15
TobiSGD
Moderator
 
Registered: Dec 2009
Location: Germany
Distribution: Whatever fits the task best
Posts: 17,148
Blog Entries: 2

Rep: Reputation: 4886
Quote:
Originally Posted by virtuozzi View Post
Am I getting it right about Docker, that its use case is more single-application containers, whereas LXC emulates the whole OS with "support" for multiple services/apps per container?
You can use both for both purposes. Docker is mostly used in a way where you have a single image for the OS that is used in your containers, and your applications run in an overlaying filesystem on top of that (using mechanisms like the Btrfs filesystem, AUFS, or OverlayFS). This is quite easy to maintain, but you can do the same with LXC or systemd-nspawn. AFAIK you are right that LXC seems to be used more for separate containers that each contain a complete OS (as complete as needed for the application), but it is not limited to that, and Docker and systemd-nspawn can be used in that way as well. Can't say anything about OpenVZ, never tried it.
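To illustrate the typical usage difference (the image and container names are only examples): Docker is usually handed one application per container, while an LXC container boots something closer to a small complete system that can host several services.

Code:
# Docker style: one service per container, here the official nginx image
docker run -d --name web -p 8080:80 nginx

# LXC style: a container that behaves like a small Debian machine
lxc-create -n services -t debian
lxc-start -n services -d
lxc-attach -n services    # then install several services inside it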
 
  

