
hazmatt20 12-13-2007 11:47 AM

Raid5 array works but won't start on boot
First, some context: I just installed Debian on this system, and all of my data is stored on other disks that are not going into the array, so there is no urgency here.

To keep it short, I installed Debian on /dev/hda and created a RAID5 array on /dev/sd[a-e]. No problems; everything was still fine after a reboot. However, I actually have six drives and only used five so that I could test growing the array for future needs. When I add the 6th drive, the array does not come up automatically. /proc/mdstat shows


Personalities : [raid6] [raid5] [raid4]
md0 : inactive sda[0](S)
488386496 blocks
If I take the 6th drive back out, the array starts on reboot like it should. With the 6th drive in, I have to do one of two things to start it. Either


mdadm --stop /dev/md0
mdadm -A /dev/md0


or


/etc/init.d/mdadm-raid restart
I tried bringing the array up and then running


echo "DEVICE partitions" > /etc/mdadm/mdadm.conf
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
but it did not help. I also tried replacing "partitions" with "/dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1", as well as manually entering the array's member drives in mdadm.conf. I'm about to go ahead and add the last drive and see whether it works correctly, since any drives added in the future should only be for the array, but it still concerns me. What if I just wanted to add another separate drive for, say, /var, that wouldn't be part of the array? I would probably have to add "mdadm --stop /dev/md0; mdadm -A /dev/md0" to rc.local for the RAID array to work. Any suggestions?
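[Editor's note: one likely culprit, not confirmed in this thread: on Debian, the initramfs carries its own copy of mdadm.conf, so edits to /etc/mdadm/mdadm.conf do not take effect at boot until the initramfs is rebuilt. A minimal sketch, assuming a Debian system with the stock mdadm initramfs hook:]


```shell
# Sketch (requires root): regenerate the array definitions from the
# currently running array.
echo "DEVICE partitions" > /etc/mdadm/mdadm.conf
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# The initramfs embeds its own copy of mdadm.conf; rebuild it so the
# new definitions are actually seen at boot time:
update-initramfs -u
```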

complich8 12-13-2007 11:57 PM

Out of curiosity, is there anything interesting in /var/log/messages or the dmesg output pertaining to mdadm and/or RAID?

hazmatt20 12-14-2007 01:02 AM

In /var/log/syslog, after the drives load, it has:


raid5: automatically using best checksumming function: pIII_sse
pIII_sse : 6327.000 MB/sec
raid5: using function: pIII_sse
md: md driver 0.90.3 MAX_MD_DEVS=256, MD_SB_DISKS=27
md: bitmap version 4.39
raid6: int32x1
raid6: int32x2
raid6: using algorithm sse2x2
md: raid6 personality registered for level 6
md: raid5 personality registered for level 5
md: raid4 personality registered for level 4
md: md0 stopped
md: bind<sda>
raid5: not enough operational devices for md0 (3/3 failed)
RAID5 conf printout:
 --- rd:3 wd:0 fd:3
raid5: failed to run raid set md0
md: pers->run() failed ...
Attempting manual resume

That's about it as far as logs are concerned. After I added the 6th drive and grew the array, it did the same thing. While I probably could just start over and create the array from scratch with all six drives, I don't want the array to break if I add another drive later.
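[Editor's note: the grow step mentioned above can be sketched as follows; the device names are examples from this thread, not verified against the actual system:]


```shell
# Sketch (requires root): add a 6th member to a 5-disk RAID5 and
# reshape onto it.
mdadm --add /dev/md0 /dev/sdf1           # new disk joins as a spare
mdadm --grow /dev/md0 --raid-devices=6   # reshape to use all 6 disks

# The reshape runs in the background; watch its progress:
cat /proc/mdstat

# After the reshape completes, grow the filesystem to fill the new
# space (assuming ext2/ext3 here):
resize2fs /dev/md0
```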

hazmatt20 12-14-2007 01:28 AM

So, I think it all has to do with one of the drives (the one I added later, in fact). When I moved it to a different SATA port, the array came up with all drives except that one. I think it may also have been causing a problem I had earlier, where an array of 3 drives always came up degraded on boot; I could re-add the drive and rebuild, but it always came up degraded again.

Now when the drive comes up, it sometimes appears as sdc and sometimes as sde, but whichever letter it gets, it has no entry for its 1st partition in /dev (e.g. if it's sdc, /dev/sdc1 doesn't exist even though fdisk shows the partition). If I use fdisk to delete and recreate the partition, it shows up in /dev and I can add the drive to the array, but after a reboot it's gone again. Do you think the drive is shot? I'm going to see if I can find some tools from Seagate in the meantime.
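[Editor's note: two quick checks worth trying here, as a sketch; the device name is an example and may differ on the actual system:]


```shell
# Ask the kernel to re-read the partition table (requires root), so
# /dev/sdc1 can appear without deleting and recreating the partition
# in fdisk:
blockdev --rereadpt /dev/sdc

# Check the drive's SMART health status before trusting it again
# (smartctl is in the smartmontools package):
smartctl -H /dev/sdc
```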
