Linux - Server
This forum is for the discussion of Linux software used in a server-related context.
I am quite curious as to why you would run 5 or 6 virtual machines, one for DHCP, one for DNS, one for NFS, etc., instead of a single server that has all those server applications in a single install of Linux. Doesn't all the overhead of the Linux kernel running on 6 virtual machines give you worse performance?
"Doesn't all the overhead of the Linux kernel running on 6 virtual machines give you worse performance?"
If your server's specs make this a problem, you should upgrade your servers or run fewer machines per node. Also, for KVM-based virtualization we have KSM (Kernel Samepage Merging), which deduplicates identical memory pages across guests; for OpenVZ/LXC this does not apply, as all containers share a single kernel.
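The KSM point above is easy to check for yourself: on KVM hosts the kernel exposes KSM through sysfs. A minimal sketch, assuming the standard `/sys/kernel/mm/ksm` interface (values and availability vary by kernel build):

```shell
# Check whether KSM (Kernel Samepage Merging) is available and active.
# /sys/kernel/mm/ksm is the standard sysfs interface; it may be absent
# on kernels built without CONFIG_KSM.
if [ -f /sys/kernel/mm/ksm/run ]; then
    echo "KSM run flag: $(cat /sys/kernel/mm/ksm/run)"       # 1 = merging enabled
    echo "Pages shared: $(cat /sys/kernel/mm/ksm/pages_sharing)"
else
    echo "KSM not available on this kernel"
fi
```

If many near-identical guests are running, a nonzero `pages_sharing` shows how much memory the overlapping kernels and libraries are actually costing you after deduplication.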
I would have to add the following reasons to run multiple machines, each running a single service:
-- Network design testing
With the right hardware node you can create a complete office environment for testing network designs, to ensure traffic flow and QoS are optimized prior to implementation.
-- Platform stability testing
This allows you to test the latest patches and updates prior to implementing them in your production environment.
-- Creation of backup media
It is much easier to create and maintain backups of entire servers.
-- Hardware node testing
This allows you to test the hardware fully, so you have a stable kernel and stable hardware, with the security of knowing your hardware is not simply going to fail, because you have fully tested it during the development phase.
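The hardware-testing point above can be sketched as a simple burn-in pass before trusting a node with guests. This assumes `stress-ng` is installed; the worker counts and duration are arbitrary examples, not recommendations:

```shell
# Sketch: burn in a new hardware node before moving guests onto it.
# Assumes the stress-ng package is installed; tune workers and duration
# to your hardware.
if command -v stress-ng >/dev/null 2>&1; then
    # Load CPUs and exercise memory for a short window, then report metrics.
    stress-ng --cpu 4 --vm 2 --vm-bytes 75% --timeout 60s --metrics-brief
else
    echo "stress-ng not installed; install it to run this burn-in"
fi
```

In practice you would run this (and a memory test) for hours rather than seconds, and watch the kernel log for machine-check or I/O errors while it runs.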
I am sure there are other reasons to implement this type of topology. I have slowly started moving this way with my personal network; it lets me swap out hardware much more easily, since I don't have to take down all my services just to upgrade the RAM on my web server.
Virtualization may add complexity to your setup compared with running all services in one OS, but one basic benefit is isolation.
You may also refer to this overview of the basic benefits: http://www.itworld.com/nls_windowsserver050411
I had the same question in the beginning, and in fact I even started by configuring everything on one machine. But as time went on, as others mentioned, I found it very difficult to maintain the server. Moreover, when you change or upgrade the physical server, transferring VMs comes in very handy: you do not have to go through the tedious process of configuring a server from scratch (in your case, you would have to re-configure all the services). A problem in one service affected all the others, i.e. downtime for maintenance of one service meant downtime for all. I do not remember a particular example to point to straight away. If you are designing servers for a very busy environment, you are better off isolating them. OpenVZ is one good solution if you have low-spec hardware.
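The "transferring VMs comes in very handy" point is because a KVM guest is essentially just a disk image plus an XML domain definition. A minimal sketch of moving one to a new physical host, with hypothetical guest and host names:

```shell
# Sketch: migrate a libvirt/KVM guest to a new host by copying its
# definition and disk image. "webvm" and "newhost" are hypothetical names.
virsh dumpxml webvm > webvm.xml                  # save the domain definition
virsh shutdown webvm                             # stop the guest cleanly
rsync -a /var/lib/libvirt/images/webvm.qcow2 \
    newhost:/var/lib/libvirt/images/             # copy the disk image
scp webvm.xml newhost:                           # copy the definition
ssh newhost "virsh define webvm.xml && virsh start webvm"
```

Compare this with re-installing and re-configuring DHCP, DNS, NFS, and the rest by hand on the new box.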
Here we have a three-server architecture: web server, API server, database server. In our production environment these are separate physical machines; however, for development and testing we don't need that level of hardware, so by using VMs for dev and testing we can run these separate instances (configured the same as production) on a single server, saving money and electricity without compromising performance. It also means we need less rack space at the office.
For the vast majority of applications, even "entry level" server hardware is more than capable of running multiple VMs to allow a split of services.
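The dev/test setup described above is typically built by cloning a production-like template guest per tier. A sketch using libvirt's `virt-clone`, with hypothetical guest names and image paths:

```shell
# Sketch: stamp out dev copies of a production-like template on one host.
# Guest names and image paths are hypothetical.
for tier in web api db; do
    virt-clone --original "${tier}-prod-template" \
        --name "${tier}-dev" \
        --file "/var/lib/libvirt/images/${tier}-dev.qcow2"
    virsh start "${tier}-dev"
done
```

Each clone gets its own MAC address and disk, so the three tiers can talk to each other over a virtual network exactly as the physical machines do in production.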