Using mdadm - Failed RAID-5 Array but individual disks check out ok
A battery failed on me last month, and I think that was the original fault that eventually brought the whole array down. Since I was in Germany for a month, I was not able to see it or fix it until the failure was catastrophic.
The array is made up of:
Device 0 --- /dev/hde1
Device 1 --- /dev/hdg1
Device 2 --- /dev/hdi1
Device 3 --- /dev/hdk1
I did a scan of the individual devices and they all check out ok. I think /dev/hde1 was the first to fail, because when I run mdadm --examine /dev/hde1 it says that all the devices are active and in sync. The State is active, the checksum is correct, and Events = .3
When I did a scan of /dev/hdk1 it says that Device 0 is removed. The State is active, checksum is correct, and events = 0.84123.
The other devices, /dev/hdg1 and /dev/hdi1, both show Device 0 is removed and Device 3 is faulty removed. The States are clean, checksums are correct, and events=.543690.
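For reference, here is roughly how I pulled those results, shown as a dry run (it only prints the commands; drop the echo and run as root to get the real --examine output). The device names are from my setup:

```shell
# Dry run of the scans above: print the examine command for each
# member device instead of executing it.
for dev in /dev/hde1 /dev/hdg1 /dev/hdi1 /dev/hdk1; do
    echo "mdadm --examine $dev"
done
```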
I am not sure what "events" means here, but my guess is that it records where things stood before each drive failed, so I have included the values above.
The question that I have is:
Is there a way to roll the array back to a point when things were still good and just continue on from there? Can I roll back to just before Device 3 went down and get the array up and running degraded on Device 1, Device 2, and Device 3? I would even be willing to go back to the point when Device 0 first went offline.
The disks themselves are essentially fine. When the BIOS battery failed, it seems the hard drives went offline one by one. With a new battery installed, all the drives show up good now.
P.S. I also hope to learn more about mdadm from you guys. Any in-depth technical information you can point me to will be greatly appreciated; I need more than just the man pages. I have a degree in computer science but not much time to keep up with it all, so a Reader's Digest version is REALLY appreciated.