Backup/restore system on RAID1 causes raid device to disappear
I want to back up the whole system (Suse 10.3) running on RAID1 and restore it. When I restore and try to boot, /dev/md0 is no longer present.
I started with a clean install of Suse 10.3 to /dev/md0 (RAID1) on sdb3 and sdc3. Works fine.
The backup sequence is--
1. Boot up on Suse installed in another partition (sdb2)
2. Try to mount /dev/md0 and discover md0 doesn't exist.
3. Go to YaST and set the RAID partitions to md0 (and do not format)
4. /dev/md0 mounts.
5. Copy system on md0 to the external drive.
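For steps 2-4, re-creating the array in YaST each time shouldn't be necessary; assembling the existing array is the non-destructive route, since it reads the superblocks rather than rewriting them. A sketch (as root, assuming the member partitions sdb3/sdc3 from above and a hypothetical mount point /mnt/raid):

```shell
# Assemble the existing RAID1 array from its members
# (unlike --create, this does not rewrite the array metadata)
mdadm --assemble /dev/md0 /dev/sdb3 /dev/sdc3

# Or let mdadm scan all partitions for RAID superblocks
mdadm --assemble --scan

# Mount it and copy the contents to the external drive
mount /dev/md0 /mnt/raid
```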
Test the restore--
1. Boot up Suse sdb2
2. 'rm' everything on /dev/md0
3. Copy the backup from the external drive to md0.
4. Reboot and select the Suse with RAID
mdadm: no devices found for /dev/md0
With Suse 10.3 installed on sdc2 as well as sdb2, when I boot up the "other" installation, /dev/md0 is not present. Creating it in YaST with no formatting brings it back (with all the data intact). Reboot into the other Suse and /dev/md0 is not present again.
It appears that something is recorded in the RAID1 partitions that identifies the specific system they were set up under. If so, how can one back up a system installed on RAID1, then restore it so that it will run with RAID1?
I don't know if you are still working on this, and I have only limited experience myself, but I guess the relevant questions I would be asking are:
(1) How did you back up/copy the files (rsync? cp? init level 1?)?
(2) What does mdadm.conf say about the array (has its UUID changed)?
(3) How are you booting and starting the system and mdadm (GRUB? LILO? autodetect? initrd? etc.)?
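For (2), the array's UUID can be read directly from the member superblocks and compared against /etc/mdadm.conf; if they no longer match, the array won't be found at boot. A sketch:

```shell
# Print the ARRAY line (including UUID) for every running array,
# in the same format mdadm.conf expects
mdadm --detail --scan

# Inspect the superblock on each member to confirm the UUIDs agree
mdadm --examine /dev/sdb3
mdadm --examine /dev/sdc3
```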
Another strategy would be to add the external backup drive to your RAID1, let it sync, then remove it from the RAID (for backup). Restore would be booting the system, starting the raid with only the external drive and adding the blanked out originals and letting them sync.
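That sync-and-detach strategy might look like the following sketch, assuming the external drive appears as a hypothetical /dev/sdd1:

```shell
# Grow the mirror to three devices and add the external partition
mdadm --grow /dev/md0 --raid-devices=3
mdadm /dev/md0 --add /dev/sdd1        # /dev/sdd1: hypothetical external partition

# Watch the resync; wait until it finishes before detaching
cat /proc/mdstat

# Detach the synced copy for offline storage, then shrink back to two devices
mdadm /dev/md0 --fail /dev/sdd1
mdadm /dev/md0 --remove /dev/sdd1
mdadm --grow /dev/md0 --raid-devices=2
```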
Thanks for the ideas.
I've forgotten the details, but I did something that made it more tractable: I can now run Linux from a different drive/partition and access /dev/md0, which requires--
mdadm --create /dev/md0 --level=raid1 --raid-devices=2 --spare-devices=0 /dev/sdb3 /dev/sdc3
mdadm then sees that these partitions are already part of a RAID1 array and asks y/n whether to proceed. Proceeding (re)creates the RAID1 array, and /dev/md0 becomes available (it can then be copied, or tar'ed, to an external drive for backup).
When I reboot and select the "main" Linux that was installed with /dev/md0, it now locates md0 without intervention (I forget what I had to do to make this happen... it seems something was missing that I had to add...).
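The missing piece may well have been an ARRAY line in /etc/mdadm.conf: --create writes a new array UUID, so any old config entry no longer matches. A sketch of re-recording it and rebuilding the initrd so the array is found at boot (the mkinitrd step is the SUSE-era convention; this is an assumption, not confirmed by the thread):

```shell
# Append the current array definition (with its new UUID) to mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf

# Rebuild the initrd so the boot process can assemble the array
mkinitrd
```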
Even more recently--I've found that save/restore of the whole system works if tar is used with an exclude file that eliminates things such as /dev and /proc.
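A minimal sketch of such a tar backup, run from the root of the mounted RAID filesystem (the paths and exclude list below are illustrative):

```shell
# Exclude file: pseudo-filesystems that must not be archived or restored
cat > exclude.txt <<'EOF'
./dev
./proc
./sys
EOF

# Run from the root of the filesystem to back up (e.g. cd /mnt/raid first);
# -p preserves permissions, -X applies the exclude list
tar -cpzf /tmp/system-backup.tar.gz -X exclude.txt .
```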
Well, I figured you had it fixed by now. Congrats