Mdadm: reporting 2 drive failures in RAID5 array
I am having trouble with my RAID5 array. It is not a boot drive and is used only for file storage. It consists of 9x1TB drives, giving me around 7.2TB of usable space. mdadm is reporting that two of the drives have failed (I highly doubt that; they are only 6 months old). The array has around 50GB free and I do not have a backup of any of the data.
The array (md0) consists of sd[a-j]1, except sdb1 (the boot drive).
I am running Ubuntu Server Jaunty (9.04), upgraded the other day.
The output of 'mdadm -v -A /dev/md0' is at http://pastebin.com/m2657adcd
The output of 'mdadm --examine /dev/sdd1' and 'mdadm --examine /dev/sdg1' is at http://pastebin.com/mccbb16a
Why do you doubt it? Failures of cheap 1TB drives are not uncommon, and if the drives were all bought together and come from the same batch, correlated failures are to be expected. People I know use RAID 6 if the data matters, and keep backups if it really matters.
Maybe some guru will post some magic for you, but you may have simply lost your data.
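One thing worth doing before assuming anything: check whether the two drives actually have media failures, or were just kicked out of the array for some other reason (cable glitch, controller reset, timeout). A quick SMART check will tell you; this assumes smartmontools is installed and uses the device names from your post (note smartctl takes the whole disk, not the partition):

```shell
# Install if needed: sudo apt-get install smartmontools

# Full SMART report for each suspect drive:
sudo smartctl -a /dev/sdd
sudo smartctl -a /dev/sdg

# Key attributes to look at in the output:
#   Reallocated_Sector_Ct  - sectors remapped due to read/write errors
#   Current_Pending_Sector - sectors waiting to be remapped
# If both are zero and the self-test log is clean, the drives were
# probably dropped for a reason other than genuine disk failure.
```

If the drives look healthy, a forced reassembly has a decent chance of getting the array back.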
Is it possible to re-assemble and re-sync the array, recalculating the "bad" portion from parity while keeping the rest of the data intact?
Edit: The disks were not all bought at the same time; the array has been grown gradually from 4 drives to 9. I am considering RAID 6 if I can get my data back...
I have no personal experience trying to recover from a Linux RAID failure.
However, a quick Google search for "linux recovery for failed raid" turns up, as the very first link, http://www.linuxjournal.com/article/8874, which looks like it might be useful, or at least informative.
Some of the other links might be useful as well, though some are commercial applications or services.
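For what it's worth, the usual approach when RAID5 loses two members at nearly the same time is a forced reassembly: mdadm picks the drives with the most recent event counts and starts the array degraded instead of refusing. This is only a sketch based on the mdadm man page, not something I've run against your array; the device names come from your post (sd[a-j]1 minus sdb1), and --force can make things worse if the wrong drives are chosen, so image the disks with ddrescue first if you possibly can:

```shell
# Stop whatever partial assembly exists:
sudo mdadm --stop /dev/md0

# Force assembly from all nine members (sda1 plus sdc1..sdj1).
# mdadm will use the freshest superblocks and start degraded:
sudo mdadm --assemble --force /dev/md0 /dev/sd[ac-j]1

# See what came up before writing anything to the filesystem:
cat /proc/mdstat
sudo mdadm --detail /dev/md0

# If it starts degraded with one member missing, re-add the stale
# drive (sdg1 here is just an example) so it rebuilds from parity:
sudo mdadm --re-add /dev/md0 /dev/sdg1
```

Then mount read-only and verify your data before letting anything write to it. If --re-add is refused, a plain --add will trigger a full rebuild onto that drive instead.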