LinuxQuestions.org (/questions/)
-   Linux - Newbie (https://www.linuxquestions.org/questions/linux-newbie-8/)
-   -   Backing up a "live system" using tar (followed by regular incremental backups)? (https://www.linuxquestions.org/questions/linux-newbie-8/backing-up-a-live-system-using-tar-followed-by-regular-incremental-backups-784062/)

uncle-c 01-22-2010 12:30 PM

Backing up a "live system" using tar (followed by regular incremental backups)?
 
Hello all.
I was just wondering if this is possible? Obviously one would have to exclude certain folders/directories, but would the backup work if the system is up and running in its native "live" state? Which directories could be excluded? Does swap need to be turned off?
Ideally I would like to make incremental backups on a separate partition of the same hard drive. I will endeavour to back up the MBR/partition table using dd.
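
For the MBR I'm assuming something along these lines would be enough -- /dev/sda and the output path are just examples:

# the first 512 bytes hold the boot code plus the MBR partition table
dd if=/dev/sda of=/backups/sda-mbr.bin bs=512 count=1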

Regards
C

GrapefruiTgirl 01-22-2010 12:37 PM

It is certainly possible, yes; some of this is what I do:

--backup from a live, running state, using a script started by cron
--ignore /proc and /sys and /dev, as these can cause problems (a rough tar sketch follows below).
--I don't recurse into removable media or other filesystems that happen to be mounted (DVD, CD, or other partitions/drives besides / and /home).
--swap can stay enabled; it makes no difference that I'm aware of, and I don't see why it should.
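
If you do want to stick with tar as in your title, the general shape is something like the following; the destination path is made up, and using --one-file-system on top of the explicit excludes is just belt-and-braces:

# back up / while skipping pseudo-filesystems, other mounts, and the backup destination itself
tar --create --gzip --one-file-system \
    --exclude=/proc --exclude=/sys --exclude=/dev \
    --exclude=/backups \
    --file=/backups/root-$(date +%F).tar.gz /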

I don't back up incrementally, and presently I don't bother tarring either; instead, I do one backup per day, keeping 7 days of backups which get rotated on each run (the oldest gets wiped, the newest gets created).

Also, I don't do anything as far as the MBR or partition table goes, so if there's anything to be done there, someone else will have to comment :)

Sasha

uncle-c 01-22-2010 02:22 PM

Thanks GGirl,
A quick question. If you do not tar, do you rsync or just cp using the -p argument? Or perhaps neither?!

C

GrapefruiTgirl 01-22-2010 02:46 PM

I used to use `cp` until I took the relatively small amount of time to learn how to use `rsync`, and now I use rsync. It's perfect for the job IMHO :) and requires a little less command-line-argument-hackery than `cp` did.
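
As a rough sketch of what I mean (the destination path and the weekday-based naming are just one way to get a 7-day rotation; adjust the excludes to taste):

# -a = archive mode, -H = preserve hard links, -x = stay on one filesystem
# --delete removes files from the copy that no longer exist in the source
DEST=/mnt/backup/$(date +%A)        # e.g. /mnt/backup/Friday
rsync -aHx --delete \
      --exclude=/proc --exclude=/sys --exclude=/dev \
      / "$DEST"/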

syg00 01-22-2010 04:35 PM

A backup of a running system can never be considered valid unless you can quiesce (and flush) every subsystem that issues write I/O, for the entire duration of the backup.
Incremental backups also rely on valid timestamps in the on-disk copy; they're no good if the updated data still resides (only) in memory.

Sometimes you mightn't care - most of the time you won't even know. Bad, very bad.

choogendyk 01-22-2010 06:02 PM

Technically, and correctly, you should take a system down to single user to get a reliable backup. However, in reality, many, many systems have to be up and running 24/7, and backups just take too long to have a system down. So sysadmins have lots of practical ways of improving their odds.

Snapshots vary from OS to OS. I use fssnap in Solaris 9 and ZFS snapshots in Solaris 10; there are options for Linux as well (LVM snapshots, for example). This reduces the window of time during which you are vulnerable to a file system changing underneath you while you are running backups. However, it is still vulnerable to things being in memory and not flushed, to databases being in flux, etc.

So, identify whatever systems you have that might be an issue -- say, for example, MySQL. Typically MySQL will have its own tools for backup. However, you can get away with just having the MyISAM structure backed up by another tool, like tar or ufsdump, if you flush the database, lock the tables, pull your snapshot, and then unlock the tables. Then you can back up the snapshot, and you only had the database locked up for maybe a few seconds; a rough sketch of that sequence is below. Of course, that's only one example. You have to examine your system and see what the issues are.
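
On Linux with LVM, that flush/lock/snapshot/unlock dance might look roughly like this. The volume group, logical volume, mount point and password variable are all made-up names, the MySQL datadir is assumed to live on that LV, and the SLEEP is just a crude way to hold the lock open while the snapshot is created:

# hold the read lock in one client session while the snapshot is taken
mysql -u root -p"$MYSQL_PW" <<'SQL' &
FLUSH TABLES WITH READ LOCK;
SELECT SLEEP(10);   -- crude: keeps this session (and the lock) alive briefly
UNLOCK TABLES;
SQL
sleep 2                                            # give the lock time to be taken
lvcreate --snapshot --size 1G --name dbsnap /dev/vg0/mysql
wait                                               # locking session ends, lock released

# now back up the frozen copy at leisure
mount -o ro /dev/vg0/dbsnap /mnt/dbsnap
tar -czf /backups/mysql-$(date +%F).tar.gz -C /mnt/dbsnap .
umount /mnt/dbsnap
lvremove -f /dev/vg0/dbsnap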

I've only been bitten once or twice in more than 10 years by not having a viable backup in a disaster situation. I had a boot drive die, and the recovered system was not bootable (something had come out inconsistent because of backing up a live system, I presume). I booted off CD-ROM, did an upgrade install over the recovered system, and that patched things up while still keeping my preferences and configurations. I might also have tried a slightly older full backup plus incrementals to bring me up to date, but I decided on the spur of the moment that the upgrade would get me running more quickly. Knowing a variety of options helps in a crisis.

jschiwal 01-22-2010 07:13 PM

Look in Section 5.2 of the tar info manual, which explains the -g (--listed-incremental) option used to produce either incremental or differential backups. The difference is whether you keep reusing the .snar file from the full backup (tar updates it on every run) or point tar at a fresh working copy of the full backup's .snar file each time.

Files created or modified after tar was started will not be backed up.
One strategy could be to create a full backup (level 0 backup) manually in single user mode, and incremental or differential backups in a cron job.
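
A rough sketch of how the .snar bookkeeping fits together (paths are just placeholders):

# full (level 0) backup -- tar records per-file metadata in the .snar file
tar --create --listed-incremental=/backups/root.snar \
    --file=/backups/full.tar --one-file-system /
cp /backups/root.snar /backups/root.snar.level0   # pristine copy for differentials

# incremental: keep reusing /backups/root.snar, which tar updates on every run
tar --create --listed-incremental=/backups/root.snar \
    --file=/backups/inc-$(date +%F).tar --one-file-system /

# differential: use a fresh working copy of the level-0 .snar each time instead
cp /backups/root.snar.level0 /tmp/work.snar
tar --create --listed-incremental=/tmp/work.snar \
    --file=/backups/diff-$(date +%F).tar --one-file-system /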

I would also run "fdisk -l" and "fdisk -lu" and print out the results. The latter reports positions in 512-byte sectors, which eliminates rounding errors. That would allow you to use losetup with an offset to attach a loop device at the starting point of a partition. In the event that the MBR was damaged beyond repair but you could still access the rest of the disk, you could still mount the partition and recover the files.
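
For example (the device name and start sector here are made up; read the real start sector from your saved "fdisk -lu" output):

# suppose fdisk -lu showed /dev/sda2 starting at sector 208845
losetup -o $((208845 * 512)) /dev/loop0 /dev/sda
mount -o ro /dev/loop0 /mnt/rescue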

