Repair a broken RAID 1 partition
Simple question: how do you repair a RAID 1? It's a two-disk software RAID. The partition on disk one is broken, the one on disk two is okay, and I want to copy two to one. How?
How to Restore a RAID Configuration After a Hard Drive Failure
I have only had to deal with this twice and both times it was a computer that belonged to someone else. I always do a backup of the system first while I can still get to the data.
The first time this happened, I just replaced the bad drive and the RAID manager saw the new drive and automatically duplicated the working drive. I was able to find an exact drive replacement.
The second time I did not have an exact drive, so I had to replace both drives, which meant starting over. I created a new RAID configuration and then performed a restore. Everything was then back to normal.
I would ask if you have a bad hard drive, can you find an exact replacement or must you replace both hard drives?
I don't have any reason to think there's physical damage. I have no problems with the RAID 0 partitions on the rest of the disk, at least. The RAID 1 is my /boot partition.
What do you mean by "performed a restore"? I think that's the bit I want to do ... from the good disk to the bad one.
There is some information about repairing RAIDs on this page: http://www.linux.com/howtos/Software...-HOWTO-4.shtml
I did this yesterday.
Background: RHEL4/U4 on a SunFire X2100. I was burning in a few (Seagate SATA) disks for eventual backup. I ran Sun's utilities in a stress mode for a number of hours to be sure there was no infant mortality. The SunFires have a nice mechanism for quickly removing and inserting disks on a carrier -- 4 screws to hold the disk in the carrier.
The procedure I followed was to place a new disk in a slot, and reboot. The version of mdadm that is in RHEL is 1.6.0. The status from mdadm noted the lack of a valid partition table on the new disk and mdadm "removed" the partitions from the RAID1 arrays. This box has a simple partition scheme -- two partitions -- /boot for booting, and everything else in an LVM in one other partition.
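If you want to see that state for yourself, a quick way to inspect the arrays (device names like /dev/md0 are just examples here -- use whatever your setup has) is:

    cat /proc/mdstat            # overview of all md arrays and their sync state
    mdadm --detail /dev/md0     # per-array view; missing members show up as removed or faulty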
I rebooted with gparted, and wrote a "label" on the new disk -- the gparted term for empty partition table. I then booted back into RHEL. This time mdadm did not complain about the lack of a partition, but it didn't do anything about syncing the new disk.
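If you would rather not boot a gparted CD just for that, something like the following should write an empty msdos label from the command line as well (a sketch only -- /dev/sdb is assumed to be the new disk, so double-check the device name before running it):

    parted -s /dev/sdb mklabel msdos    # write an empty DOS partition table to the new disk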
I found a reference to sfdisk on a webpage, and wrote a little script to clone the partition table from the good disk to the new disk:
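The script itself isn't shown above, but the core of it was the usual sfdisk dump-and-restore pipeline, roughly like this (a minimal sketch, assuming /dev/sda is the good disk and /dev/sdb is the new one -- swap to match your layout):

    sfdisk -d /dev/sda | sfdisk /dev/sdb    # dump the good disk's partition table and write it to the new disk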
If you use the script, you will probably want to remove the last step of the pipeline before a final run so that you can verify that things look OK. The sfdisk utility is careful -- I had "sbd" in the first run instead of the correct "sdb", and it said it could not find the disk.
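With the partition table cloned, the new partitions still have to be put back into the arrays before any syncing happens. A minimal sketch, assuming /dev/md0 is /boot on sda1/sdb1 and /dev/md1 holds the LVM partition on sda2/sdb2 (adjust the names to your own layout):

    mdadm /dev/md0 --add /dev/sdb1    # re-add the /boot member; the resync starts automatically
    mdadm /dev/md1 --add /dev/sdb2    # re-add the LVM member
    cat /proc/mdstat                  # watch the rebuild progress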
Once the sync was done, I powered off, removed the "good" disk, and rebooted, simulating a broken disk in the RAID1 arrays. It booted flawlessly. I was grateful (not to mention impressed) that mdadm was able to reconstruct the MBR contents (RHEL uses grub). I then powered off, inserted the good disk again, rebooted, and it ran without incident.
In at least one web page, there was a section on how to use grub to fix up the disk for booting, but the version of mdadm that I used seemed to handle all that.
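If it hadn't, the usual manual fix with grub legacy (which is what RHEL4 uses) is to install the boot loader onto the new disk by hand -- a sketch, assuming the new disk is /dev/sdb and /boot is its first partition:

    grub
    grub> device (hd0) /dev/sdb    # map the new disk to hd0 for this grub session
    grub> root (hd0,0)             # point at the /boot partition on the new disk
    grub> setup (hd0)              # install stage1/stage2 into its MBR
    grub> quit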
I assume your situation varies somewhat from mine -- you may be using IDE disks, you may have more partitions, etc., so you'll need to adapt to your circumstances.
Best wishes ... cheers, makyo