Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
In our current server environment there are a couple of servers which need periodic backup. These servers should under no circumstances go down.
What I would like to know is whether it is possible to back up (clone) the hard drives while they are up and running.
Based on what I have read, it seems that either dd or rsync combined with a cron job should suit this need.
Is this correct, and are there any easier and more user-friendly alternatives?
'dd' will do the job, but in the end it's just a snapshot of a running system that has already changed by the time the backup finishes.
If this is not about replacing the servers with new hardware, but rather a backup solution that lets you replace a corrupted server within minutes by swapping the hard disk, be aware that hard disk failures are not the only things that can break. The fastest replacement time across all failure scenarios comes from keeping an exact hardware clone of each server on your priority list.
With such hardware twins in place, I would rather copy all contents continuously using 'duplicity' (http://www.nongnu.org/duplicity/) or a similar backup tool, applying changes at an hourly interval.
The 'Duplicity' package can handle remote backups. It avoids single large files by splitting the backup and keeping checksums to test for changes.
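As a hedged sketch of such an hourly duplicity run (the paths are throwaway temporary directories; a real setup would target the twin server over a remote URL and use encryption):

```shell
#!/bin/sh
# Illustrative duplicity invocation; guarded so the sketch degrades
# gracefully on machines where duplicity is not installed.
SRC=$(mktemp -d)
DEST=$(mktemp -d)
echo "demo" > "$SRC/config.txt"

if command -v duplicity >/dev/null 2>&1; then
    # --no-encryption only keeps the demo self-contained; in production
    # you would encrypt and point at a remote URL such as
    # sftp://backup@twin-server//var/backups/primary (hypothetical name).
    duplicity --no-encryption "$SRC" "file://$DEST"
    RESULT=ok
else
    RESULT=skipped
fi
echo "$RESULT"
```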
Duplicity seems over the top.
We already have an identical machine as a secondary system in case the first one goes down. The only open issue is how to easily synchronize the secondary system with the primary.
Both need exactly the same configuration (as if they were the same machine), with the exception of the services that are currently running (due to licensing limitations and issues surrounding maximum connections).
It seems rsync might suffice and easily fit in our current environment.
I do find rsync to be just the ticket, but I don't really feel a need to put it in a cron job. It doesn't take long to run and I can keep an eye on it for errors.
For example, I made a dupe of my fc8 partition so I can try out the new beta...
As root:
mount /dev/sdb5 /mnt/test
rsync -xav / /mnt/test
# make a label for the new partition
e2label /dev/sdb5 /1
Then add entries in grub and fstab pointing to LABEL=/1.
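Those grub and fstab entries might look roughly like this (a sketch only; the kernel and initrd file names and the disk coordinates are placeholders you would adjust to your own system):

```
# /etc/fstab - mount the clone by its new label
LABEL=/1    /    ext3    defaults    1 1

# /boot/grub/menu.lst - boot stanza for the clone on /dev/sdb5 (hd1,4)
title FC8 clone (sdb5)
    root (hd1,4)
    kernel /vmlinuz-<version> ro root=LABEL=/1
    initrd /initrd-<version>.img
```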
If you are actually running systems that should "never" go down, then you should move immediately to clustering.
I run rsync too, but there are problems.
1. In most production environments, the time gap between when the rsync job last ran and when the failure occurs introduces a bunch of problems, especially if there is money involved.
2. The downtime while switching to your rsync backup is meaningful. Clustering eliminates all of that downtime.
Is it the case that the organizational will to build it right doesn't exist? I've been in a few situations where clustering was viewed as some kind of voodoo so it was never implemented. Unfortunately they had needlessly complex architectures that didn't fail elegantly.
In our present environment clustering isn't an option, since we are using software that doesn't support clustering, nor do the current licenses allow this.
Basically, if the system were to go down, another system should be up within 15 minutes.
Effectively this means that the secondary system has the exact same configuration, so it would simply be a matter of starting the correct services and making it look as if the service never went down.
In the near future a more fail-safe solution will be implemented, but for the time being this is how it needs to be set up.