01-22-2010, 12:30 PM | #1
Member | Registered: Oct 2006 | Location: The Ether | Distribution: Ubuntu 16.04.7 LTS, Kali, MX Linux with i3WM | Posts: 299
Backing up a "live system" using tar ( followed by regular incremental backups) ?
Hello all.
I was just wondering if this was possible ? Obviously one would have to exclude certain folders / directories but would the backup be possible if the system is up and running in its native "live" state ? Which directories could be excluded ? Does swap need to be turned off ?
Ideally I would like to make incremental backups on a separate partition of the same hard drive. I will endeavour to backup the MBR/ Partition table using dd.
Regards
C
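A minimal sketch of the dd step, assuming the disk is /dev/sda and a backup location of /mnt/backup (both names are assumptions, not from the thread):

# Save the MBR: the first 512 bytes hold the boot code plus the MS-DOS partition table.
dd if=/dev/sda of=/mnt/backup/sda-mbr.bin bs=512 count=1
# A restorable text dump of the partition table is also worth keeping:
sfdisk -d /dev/sda > /mnt/backup/sda-table.txt
# If restoration is ever needed:
#   dd if=/mnt/backup/sda-mbr.bin of=/dev/sda bs=512 count=1
#   sfdisk /dev/sda < /mnt/backup/sda-table.txt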
01-22-2010, 12:37 PM | #2
LQ Guru | Registered: Dec 2006 | Location: underground | Distribution: Slackware64 | Posts: 7,594
It is certainly possible, yes; some of this is what I do:
--back up from a live, running state, using a script started by cron
--ignore /proc, /sys and /dev, as these can cause problems
--don't recurse into any media that might be mounted (DVD, CD, or other partitions/drives besides / and /home)
--swap can stay on; it makes no difference that I'm aware of, and I don't see why it should
I don't back up incrementally, and presently I don't bother tarring either; instead, I do one backup per day, keeping 7 days of backups which get rotated on each run (the oldest gets wiped, the newest gets created).
Also, I don't do anything as far as the MBR or partition table goes, so if there's anything to be done there, someone else will have to comment.
Sasha
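A minimal sketch of a cron-driven rotation along those lines, assuming rsync (which comes up later in the thread) and a destination of /mnt/backup on a separate partition (both assumptions, not the poster's actual script):

#!/bin/sh
# Keep 7 rotated daily copies: wipe the oldest, shift the rest down,
# and create today's copy fresh.
DEST=/mnt/backup
rm -rf "$DEST/day.7"
for i in 6 5 4 3 2 1; do
    [ -d "$DEST/day.$i" ] && mv "$DEST/day.$i" "$DEST/day.$((i+1))"
done
mkdir -p "$DEST/day.1"
# -a preserves permissions, ownership, timestamps and symlinks; -H keeps hard
# links; -x stays on one filesystem, so other mounted media are not recursed
# into (and the destination partition is kept out of its own copy);
# /proc, /sys and /dev are skipped explicitly.
rsync -aHx --exclude=/proc/ --exclude=/sys/ --exclude=/dev/ / "$DEST/day.1/"
rsync -aHx /home/ "$DEST/day.1/home/"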
01-22-2010, 02:22 PM | #3
Member (Original Poster) | Registered: Oct 2006 | Location: The Ether | Distribution: Ubuntu 16.04.7 LTS, Kali, MX Linux with i3WM | Posts: 299
Thanks GGirl,
A quick question: if you do not tar, do you rsync, or just cp using the -p argument? Or perhaps neither?!
C
Last edited by uncle-c; 01-22-2010 at 02:23 PM.
01-22-2010, 02:46 PM | #4
LQ Guru | Registered: Dec 2006 | Location: underground | Distribution: Slackware64 | Posts: 7,594
I used to use `cp` until I took the relatively small amount of time to learn how to use `rsync`, and now I use rsync. It's perfect for the job IMHO, and requires a little less command-line-argument hackery than `cp` did.
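For comparison's sake, the two commands end up looking roughly like this (the paths are assumptions):

# GNU cp: -a preserves permissions, ownership, timestamps and symlinks;
# -x stays on one filesystem.
cp -ax /home/user /mnt/backup/
# The rough rsync equivalent: -a does the same preservation, -H adds hard
# links, and --delete prunes files at the destination that no longer exist
# at the source - which is what makes repeated runs into the same tree practical.
rsync -aHx --delete /home/user /mnt/backup/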
01-22-2010, 04:35 PM | #5
LQ Veteran | Registered: Aug 2003 | Location: Australia | Distribution: Lots ... | Posts: 21,385
A backup of a running system can never be considered valid unless you can quiesce (and flush) all sub-systems that issue (write) I/O for the entire duration of the backup.
Incrementals require valid timestamps on the disk copy - no good if the updated data still resides (only) in memory.
Sometimes you mightn't care; most of the time you won't even know. Bad, very bad.
01-22-2010, 06:02 PM | #6
Senior Member | Registered: Aug 2007 | Location: Massachusetts, USA | Distribution: Solaris 9 & 10, Mac OS X, Ubuntu Server | Posts: 1,197
Technically, and correctly, you should take a system down to single-user mode to get a reliable backup. In reality, however, many, many systems have to be up and running 24/7, and backups take too long to keep a system down for. So sysadmins have lots of practical ways of improving their odds.
Snapshots vary from OS to OS. I use fssnap in Solaris 9 and ZFS snapshots in Solaris 10. There are options for Linux as well. A snapshot reduces the window of time in which you are vulnerable to a file system changing underneath you while you are running backups. However, it is still vulnerable to things being in memory and not flushed, to databases being in flux, etc.
So, identify whatever systems you have that might be an issue - say, for example, MySQL. Typically MySQL will have its own tools for backup, but you can get away with having the MyISAM structure backed up by another tool, like tar or ufsdump, if you flush the database, lock the tables, pull your snapshot, and then unlock the tables. Then you can back up the snapshot, and the database was only locked up for maybe a few seconds. Of course, that's only one example; you have to examine your system and see what the issues are.
I've only been bitten once or twice in more than 10 years by not having a viable backup in a disaster situation. I had a boot drive die, and the recovered system was not bootable (something had come out inconsistent because of backing up a live system, I presume). I booted off the CD-ROM, did an upgrade install over the recovered system, and that patched things up while keeping my preferences and configurations. I might also have tried a slightly older full backup plus incrementals to get up to date, but I decided on the spur of the moment that the upgrade would get me running more quickly. Knowing a variety of options helps in a crisis.
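On Linux, one common route for the snapshot step is LVM. A minimal sketch of the flush / lock / snapshot / unlock cycle described above, assuming a volume group vg0 holding the MySQL data on a logical volume named mysql (all names and sizes are assumptions):

# 1. In an open mysql session, run:  FLUSH TABLES WITH READ LOCK;
#    (the lock is released the moment that session closes, so keep it open)
# 2. While the lock is held, take the snapshot from a shell:
lvcreate --snapshot --size 1G --name mysql-snap /dev/vg0/mysql
# 3. Back in the mysql session, run:  UNLOCK TABLES;
#    The tables were only locked for the second or two the snapshot took.
# 4. Back up the snapshot at leisure, then drop it:
mount -o ro /dev/vg0/mysql-snap /mnt/snap
tar -czpf /mnt/backup/mysql-$(date +%F).tar.gz -C /mnt/snap .
umount /mnt/snap
lvremove -f /dev/vg0/mysql-snap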
01-22-2010, 07:13 PM | #7
LQ Guru | Registered: Aug 2001 | Location: Fargo, ND | Distribution: SuSE AMD64 | Posts: 15,733
Look in Section 5.2 of the tar info manual, which explains the -g option for producing either incremental or differential backups. The difference is whether you reuse the .snar file from the full backup or use a working copy of the full backup's .snar file.
Files created or modified after tar was started will not be backed up.
One strategy would be to create the full (level 0) backup manually in single-user mode, and run the incremental or differential backups from a cron job.
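A minimal sketch of that strategy with GNU tar's -g (--listed-incremental) option; the paths and exclude list are assumptions:

# Level 0 (full) backup, run once from single-user mode; this also creates
# the .snar snapshot file that records file metadata:
tar -g /var/backups/root.snar --exclude=/proc --exclude=/sys --exclude=/dev \
    --exclude=/mnt -cpzf /mnt/backup/full-level0.tar.gz /

# Incremental (from cron): reuse the same .snar, which tar updates on every
# run, so each archive holds only the changes since the previous run:
tar -g /var/backups/root.snar --exclude=/proc --exclude=/sys --exclude=/dev \
    --exclude=/mnt -cpzf /mnt/backup/inc-$(date +%F).tar.gz /

# Differential (from cron): hand tar a fresh working copy of the level-0
# .snar instead, so each archive holds everything changed since the full backup:
cp /var/backups/root.snar /var/backups/work.snar
tar -g /var/backups/work.snar --exclude=/proc --exclude=/sys --exclude=/dev \
    --exclude=/mnt -cpzf /mnt/backup/diff-$(date +%F).tar.gz /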
I would also run "fdisk -l" and "fdisk -lu" and print out the results. The latter reports in 512-byte sectors, which eliminates rounding error. This would allow you to use losetup with an offset to attach a loop device at the starting point of a partition. In the event that the MBR was damaged beyond repair but you could still access other parts of the disk, you could mount the partition that way and recover the files.
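A minimal sketch of that, assuming the disk is /dev/sda and a partition that starts at sector 63 (both assumptions - use the figures from your own printout):

# Record the layout; -u reports partition start/end in 512-byte sectors:
fdisk -l /dev/sda
fdisk -lu /dev/sda

# Later, with the MBR gone but the data intact, attach a loop device at the
# partition's recorded starting sector and mount it read-only:
losetup -o $((63 * 512)) /dev/loop0 /dev/sda
mount -o ro /dev/loop0 /mnt/rescue
# ...recover files, then detach:
umount /mnt/rescue
losetup -d /dev/loop0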