[SOLVED] Best way to do image based backups on linux
If I really have to make a complete image I use dd in conjunction with a compressor, like bzip2:
1. Write zeroes to the unused parts of the disk: use dd to write zeroes to a file until the filesystem is full, then delete that file.
2. Use dd to read the partition, pipe the output to the compressor and redirect it to a file on the backup medium.
Since every distro comes with dd and I can choose the compressor, this is the most distro-agnostic approach, I would think.
Of course there are other approaches with more functionality, like Clonezilla, but I prefer the simple things.
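A minimal sketch of that procedure; the device name /dev/sda1, the mount points, and the 16M blocksize are just example values, not the only valid choices:

Code:
# 1. Fill the free space with zeroes so it compresses well.
#    dd stops with "No space left on device" -- that is expected here.
mount /dev/sda1 /mnt/source
dd if=/dev/zero of=/mnt/source/zerofile bs=16M
rm /mnt/source/zerofile
umount /mnt/source

# 2. Image the (now unmounted) partition, compress, and store it.
dd if=/dev/sda1 bs=16M | bzip2 -c > /mnt/backup/sda1.img.bz2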
Never been one for image backups. I prefer to take a set of tar archives, one per filesystem, that I can use to restore the complete system state, or just individual files/filesystems as necessary. Combine those with the saved output of "sfdisk -d", "vgcfgbackup", and "cryptsetup luksHeaderBackup" and it should be possible to restore everything to its current state from scratch, using only the tools on my Slackware installation media.
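A rough sketch of what that can look like; the device names, volume group names, and paths here are only examples:

Code:
# One archive per filesystem, with relative ./ paths, staying on
# one filesystem so pseudo-filesystems are not swept in
cd / && tar --one-file-system -czf /mnt/backup/rootfs.tar.gz ./

# Partition table, LVM metadata, and LUKS header alongside it
sfdisk -d /dev/sda > /mnt/backup/sda.sfdisk
vgcfgbackup -f /mnt/backup/vg-%s.cfg
cryptsetup luksHeaderBackup /dev/sda2 \
    --header-backup-file /mnt/backup/sda2-luks-header.img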
I have tried Clonezilla, but nowadays I do not do image-based backups anymore. I just rsync to my external HDs. Perhaps it helps that I run no applications that I cannot reinstall. I back up configuration files as well.
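Something like this, as a minimal example (the /mnt/ext mount point is just a placeholder):

Code:
# -a preserves permissions/ownership/times; --delete mirrors removals.
# Exclude pseudo-filesystems that are regenerated at boot.
rsync -aAXH --delete \
    --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/tmp/*"} \
    / /mnt/ext/rootbackup/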
You could try www.mondorescue.org; it does a warm backup, as opposed to Clonezilla, which I believe requires a shutdown since it does a cold backup.
Depends on your requirements.
Databases need special handling if not done cold.
Many (most) prod systems cannot be shut down, so the usual approach is a combination of having the OS install disks plus something like e.g. Mondo or NetBackup https://en.wikipedia.org/wiki/NetBackup, together with DB hot-backup techniques.
Of course, if you have hot-swap mirror disks, that works too.
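As one concrete example of a DB hot-backup technique (MySQL/MariaDB shown here; pg_dump, RMAN and so on are the equivalents elsewhere, and the backup path is just a placeholder):

Code:
# --single-transaction gives a consistent snapshot of InnoDB tables
# without locking the running server
mysqldump --single-transaction --all-databases \
    | gzip > /mnt/backup/db-$(date +%F).sql.gz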
I'm curious about the dd-along-with-bzip2 approach; how would one go about restoring?
The reason I'm asking is that in a complete disaster scenario, I would like to restore the system "at a point in time" and let it restore an image, rather than take the time to copy the files, because in a disaster there are probably other things I need to do while it's restoring.
I'll also take a look at Mondo Rescue; it seems like a flexible solution as well.
4. Unmount the backup medium (see step 6 above) and you are done.
You may also want to keep a copy of the partition table listing (e.g. the output of fdisk -l), so that you have the exact partition sizes on record if needed.
Note that I use a blocksize of 16MB for the dd operations, because I found that blocksizes in the range between 8MB and 32MB (depending on your hardware) generally give better performance than the default value of 512 bytes.
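To answer the restore question above: the pipeline simply runs in reverse. A sketch, assuming the image was taken from /dev/sda1 as in the example earlier, and that the target partition is unmounted and at least as large as the original:

Code:
# Decompress the image and write it straight back to the partition
bzip2 -dc /mnt/backup/sda1.img.bz2 | dd of=/dev/sda1 bs=16M

# Or test the archive's integrity first without writing anything
bzip2 -t /mnt/backup/sda1.img.bz2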
Zeroing out the free space first allows for a smaller compressed backup. Also consider using ddrescue, which might be faster. It will retry if it reaches some bad blocks, and it varies the block size depending on the health of the drive. Bad blocks are zeroed out in the image. Plain dd without options will simply abort when it hits a bad block.
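For comparison, a sketch of both tools on a questionable disk (the device and file names are just examples):

Code:
# Plain dd needs conv=noerror,sync to continue past read errors;
# unreadable blocks are padded with zeroes
dd if=/dev/sda1 of=image.img conv=noerror,sync

# GNU ddrescue retries bad areas and records progress in a map file,
# so an interrupted run can be resumed later
ddrescue /dev/sda1 image.img image.mapfile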
Do you mean you have made an image of the running system? This is not reliable, since the file-systems can change during the backup, which will leave you with an inconsistent backup. Backups that are possibly inconsistent are worthless.
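If the volume happens to sit on LVM (an assumption; the names below are examples), one way to get a consistent image of a live system is to snapshot it first and image the snapshot:

Code:
# Freeze a point-in-time view (needs free extents in the volume group)
lvcreate --snapshot --size 2G --name rootsnap /dev/vg0/root

# Image the snapshot instead of the live volume
dd if=/dev/vg0/rootsnap bs=16M | bzip2 -c > /mnt/backup/root.img.bz2

# Drop the snapshot when done
lvremove -f /dev/vg0/rootsnap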
Originally Posted by jschiwal
Also consider using ddrescue, which might be faster. It will retry if it reaches some bad blocks, and it varies the block size depending on the health of the drive. Bad blocks are zeroed out in the image. Plain dd without options will simply abort when it hits a bad block.
I disagree. Of course you can use ddrescue, but if you have to use it because of a faulty disk, it is already too late for a backup; the data is potentially damaged. If there are already bad blocks on the disk, you are not making a reliable backup anymore; you are creating an image for rescue purposes.
Thanks! I was wondering: why the zeros?
Originally Posted by TiMMay333
I would like to restore the system "at a point in time" and let it restore an image, rather than take the time to copy the files, because in a disaster there are probably other things I need to do while it's restoring.
That's not really a valid comparison. If you have a tar backup, the backup itself is one file, and can be restored with just one command. As it works at the file level, not the block device level, it gives you enormous flexibility to restore all or part, and to do so to any location (assuming that the backup was made using ./this/that/the-other relative paths) on any device or any point within the directory tree[s].
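For example (the archive and file names are hypothetical), a full restore and a single-file restore look like this:

Code:
# Full restore into a freshly prepared filesystem mounted at /mnt/restore
cd /mnt/restore && tar -xzpf /mnt/backup/rootfs.tar.gz

# Or pull back just one file, to any location you like
tar -xzf /mnt/backup/rootfs.tar.gz -C /tmp ./etc/fstab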
dd is a lower-level technical tool for when you need a bit-identical, not just data-identical, copy. Which I suppose is what "image" means <Blush>. dd will also do stuff like read or write a given number of blocks, convert block sizes, etc.
tar is your Swiss Army knife; dd is your Leatherman multi-tool.
The problem with running dd on a live file system is that things can change while the backup is running, e.g. changes to log files. Also, you will back up the contents of /sys and /proc, which are automatically generated at boot.
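One common way around both problems is to work at the file level and keep those pseudo-filesystems out, e.g.:

Code:
# /proc and /sys are separate mounts, so --one-file-system skips them;
# explicit --exclude=./proc --exclude=./sys would work as well
tar --one-file-system -czf /mnt/backup/root.tar.gz -C / ./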
Personally, I like dd images of my Windows machines. If you restore from the image after a virus invasion, it is comforting to know that the sucker has been completely eliminated from the disk.