Linux Raid issues
I'm trying to build a simple RAID 5 array out of 4 SATA drives. The drives are 250GB, 250GB, 500GB and 500GB. I've created partitions on each of them that are exactly the same size, roughly 250GB:
/dev/sda1, /dev/sdb1, /dev/sdc1, /dev/sdd1

I created an array on these partitions before, but after a power outage it just died. Not sure why... but since there was little of value on there, I just decided to rebuild the set from scratch. I first tried:

mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

The device would start and then begin rebuilding as normal. About 5 minutes later it would show up like this:

+++++++++++++++++++++++++++++++++++++++
        Version : 00.90.03
  Creation Time : Wed Jan 16 21:03:10 2008
     Raid Level : raid5
     Array Size : 732563712 (698.63 GiB 750.15 GB)
  Used Dev Size : 244187904 (232.88 GiB 250.05 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Wed Jan 16 21:05:22 2008
          State : clean, degraded
 Active Devices : 2
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 26ab635c:6dcc91db:645e3c10:269b8dbf
         Events : 0.34

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       17        1      active sync   /dev/sdb1
       2       0        0        2      removed
       3       0        0        3      removed

       4       8       49        -      spare         /dev/sdd1
       5       8        1        -      faulty spare  /dev/sda1
+++++++++++++++++++++++++++++++++++++

So it looks like 2 drives are faulty, but I can't see that they are. I have tested both of the drives and they appear fine. I can mount them and use them as individual drives, but in the RAID set it's not working. I tried zeroing the superblocks on each and I've repartitioned each of the drives. Still the same result. Any help is appreciated. |
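For reference, the reset-and-rebuild sequence I've been using looks roughly like this (a sketch only; it assumes the array is /dev/md0 and the four member partitions named above):

```shell
# Stop the array if it is still assembled (run as root)
mdadm --stop /dev/md0

# Wipe the md superblock from each member partition
for part in /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1; do
    mdadm --zero-superblock "$part"
done

# Recreate the 4-device RAID 5 array
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Watch the initial resync progress
cat /proc/mdstat
```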
thorough drive testing?
When you indicate testing the drives, do you mean a SMART long test with no failure or just the measures mentioned in your post (mounting and building a filesystem)? These latter may not exercise the drive sufficiently to find an error. I've read some complaints of the Linux raid manager too easily marking a drive as faulty, but your circumstances (power outage, repeated recreate problems) suggest the likelihood of a true hardware issue. Check out the smartmontools package if these tests are not already in play.
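With smartmontools installed, the extended self-test would look something like this (a sketch; it assumes the drive is /dev/sda and that the SATA controller passes SMART commands through):

```shell
# Kick off the extended (long) SMART self-test on one drive (run as root).
# The test runs in the background on the drive itself and does not
# interfere with normal use.
smartctl -t long /dev/sda

# Later (the drive reports an estimated completion time), check the
# self-test log and overall SMART attributes for errors:
smartctl -a /dev/sda
```

Repeat for each of the four drives; a long test that completes without error is much stronger evidence than mounting and writing a filesystem.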
If drive tests aren't the issue, you might check with a Linux RAID guru, perhaps via the appropriate mailing list, about your mdadm output. I'm not an expert, but with 4 physical devices and a requested RAID array of 4 devices, I don't see how there should ever be a spare (sdd1 in this case), unless this output is an artifact of being in the middle of creating/rebuilding on top of devices some of which have pre-existing superblock info (the apparently good ones). |
I'll try that
I ran Bonnie++ just as a cursory test, but I will do some more digging per your suggestion.
|