Linux - Hardware: This forum is for Hardware issues.
I'm trying to build a simple RAID 5 array out of 4 SATA drives. The drives are 250GB, 250GB, 500GB and 500GB. I've created a partition on each of them of exactly the same size, roughly 250GB:
/dev/sda1, /dev/sdb1, /dev/sdc1, /dev/sdd1
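For reference, a RAID 5 create over those four partitions would typically look something like the sketch below. The array name /dev/md0 and the default chunk/metadata settings are assumptions, not taken from the post:

```shell
# Hypothetical re-creation of the array described above.
# Device names match the post; /dev/md0 is an assumed array name.
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Watch the initial sync/rebuild progress:
cat /proc/mdstat
```

Note that a freshly created RAID 5 array spends a while resyncing parity; /proc/mdstat shows that progress.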
I created an array on these partitions before, but after a power outage it just died. Not sure why... but since there was little of value on it, I decided to rebuild the set from scratch. I first tried...
So it looks like 2 drives are faulty... but I can't see how. I have tested both drives and they appear fine. I can mount them and use them as individual drives, but in the RAID set it doesn't work.
I tried zeroing the superblock on each drive, and I've repartitioned each of the drives. Still the same result.
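For anyone following along, zeroing the superblocks before a re-create usually goes something like this. The array must be stopped first; /dev/md0 is an assumed array name:

```shell
# Stop the (possibly half-assembled) array before touching members.
mdadm --stop /dev/md0

# Wipe any stale md superblock from each member partition
# so the next --create starts from a clean slate.
for dev in /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1; do
    mdadm --zero-superblock "$dev"
done
```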
When you say you tested the drives, do you mean a SMART long test with no failures, or just the measures mentioned in your post (mounting and building a filesystem)? The latter may not exercise the drive enough to surface an error. I've read some complaints about the Linux RAID manager too easily marking a drive as faulty, but your circumstances (power outage, repeated re-create problems) suggest a genuine hardware issue is likely. Check out the smartmontools package if these tests are not already in play.
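A SMART extended self-test with smartctl (from smartmontools) would look roughly like this; note it runs against the whole disk, not the partition, and an extended test on 250-500GB drives can take a few hours:

```shell
# Kick off a SMART extended (long) self-test on each whole disk.
for dev in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    smartctl -t long "$dev"
done

# Hours later, check the self-test log and overall health verdict:
smartctl -l selftest /dev/sda
smartctl -H /dev/sda
```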
If the drive tests don't turn up anything, you might check with a Linux RAID guru, perhaps via the appropriate mailing list, about your mdadm output. I'm not an expert, but with 4 physical devices and a requested 4-device RAID array, I don't see how there should ever be a spare (sdd1 in this case), unless this output is an artifact of being in the process of creating/rebuilding on top of devices some of which have pre-existing superblock info (the apparently good ones).
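To see exactly which member mdadm considers a spare or faulty, and whether any partition still carries stale superblock info, the usual inspection commands are (again assuming /dev/md0 as the array name):

```shell
# Per-array view: state, rebuild progress, and each member's role.
mdadm --detail /dev/md0

# Per-member view: the superblock each partition actually carries,
# which exposes leftover metadata from a previous array.
mdadm --examine /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Kernel's summary of all md arrays:
cat /proc/mdstat
```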