LinuxQuestions.org
Old 12-13-2007, 12:47 PM   #1
hazmatt20
Member
 
Registered: Jan 2006
Distribution: FC5, Ubuntu
Posts: 126

Rep: Reputation: 15
Raid5 array works but won't start on boot


So, I want to point out that I just installed Debian on the system, and I have all of my data stored on other disks that are not going into the array, so there is no sense of urgency here.

To keep it short, I installed Debian on /dev/hda and created a raid5 array on /dev/sd[a-e]. No problems. Everything was fine, even on reboot. However, I actually have 6 drives and only used 5 so I could test growing the array for future needs. When I add the 6th drive, the array does not come up automatically. /proc/mdstat has

Quote:
Personalities : [raid6] [raid5] [raid4]
md0 : inactive sda[0](S)
488386496 blocks
If I take the 6th drive back out, the array starts on reboot like it should. With the 6th drive in, I have to do one of two things for it to start.

Quote:
mdadm --stop /dev/md0
mdadm -A /dev/md0
or

Quote:
/etc/init.d/mdadm-raid restart
I tried bringing it up and running

Quote:
echo "DEVICE partitions" > /etc/mdadm/mdadm.conf
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
but it did not help. I also tried replacing "partitions" with "/dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1", as well as manually entering the drives in the array in mdadm.conf. I'm about to go ahead and add the last drive and see if it works correctly, since any drives added in the future should only be for the array, but it still concerns me. What if I just wanted to add another separate drive, say for /var, that wouldn't be part of the array? I would probably have to add "mdadm --stop /dev/md0;mdadm -A /dev/md0" to rc.local for the raid array to work. Any suggestions?
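For reference, after those two commands the file should end up looking something like this (a sketch; the UUID is a placeholder, not taken from this system):

```
DEVICE partitions
ARRAY /dev/md0 level=raid5 num-devices=6 UUID=<array-uuid>
```

One thing worth checking on Debian: boot-time assembly uses the copy of mdadm.conf baked into the initramfs, so edits to /etc/mdadm/mdadm.conf may not take effect at boot until `update-initramfs -u` is run afterwards.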

Last edited by hazmatt20; 12-13-2007 at 12:49 PM.
 
Old 12-14-2007, 12:57 AM   #2
complich8
Member
 
Registered: Oct 2007
Distribution: rhel, fedora, gentoo, ubuntu, freebsd
Posts: 104

Rep: Reputation: 15
out of curiosity, anything interesting in /var/log/messages or dmesg output pertaining to mdadm and/or raids?
 
Old 12-14-2007, 02:02 AM   #3
hazmatt20
Member
 
Registered: Jan 2006
Distribution: FC5, Ubuntu
Posts: 126

Original Poster
Rep: Reputation: 15
In /var/log/syslog, after the drives load it has

Code:
raid5: automatically using best checksumming function: pIII_sse
pIII_sse : 6327.000 MB/sec
raid5: using function: pIII_sse
md: md driver 0.90.3 MAX_MD_DEVS=256, MD_SB_DISKS=27
md: bitmap version 4.39
raid6: int32x1
raid6: int32x2
...
raid6: using algorithm sse2x2
md: raid6 personality registered for level 6
md: raid5 personality registered for level 5
md: raid4 personality registered for level 4
md: md0 stopped
md: bind<sda>
raid5: not enough operational devices for md0 (3/3 failed)
RAID5 conf printout:
 --- rd:3 wd:0 fd:3
raid5: failed to run raid set md0
md: pers->run() failed ...
Attempting manual resume
That's about it as far as logs are concerned. After I added the 6th drive and grew the array, it did the same thing. While I could probably just start over and create the array from scratch with all 6 drives, I don't want to hit the same problem if I add another drive later.
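One thing that stands out in that log is the "rd:3" conf printout: the kernel is trying to start a 3-device set, which could mean one member still carries a superblock from an earlier 3-disk array. A sketch of how to check, with illustrative output (the device names and the real `mdadm --examine` invocation are commented out since they only make sense on the actual box, as root):

```shell
# On the real system, dump the per-disk superblocks before assembly:
#   mdadm --examine /dev/sd[a-f]1 | grep -E '^/dev|Raid Devices|Events'
# Illustrative output (NOT from this system) and a check for the odd one out:
examine="/dev/sda1 Raid Devices : 6
/dev/sdf1 Raid Devices : 3"
# Any member that does not report 6 raid devices has stale metadata:
echo "$examine" | grep -v ': 6$'
```

If a stale member turns up, zeroing its superblock (`mdadm --zero-superblock`) before re-adding it is the usual cleanup, but only on a disk you are sure holds no needed data.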
 
Old 12-14-2007, 02:28 AM   #4
hazmatt20
Member
 
Registered: Jan 2006
Distribution: FC5, Ubuntu
Posts: 126

Original Poster
Rep: Reputation: 15
So, I think it all has to do with one of the drives (the one I added later, in fact). When I moved it to a different SATA port, the array came up with every drive but that one. I think it may also have been causing a problem I had earlier, where an array of 3 drives always came up degraded on boot. I could re-add the drive and rebuild, but it always came up degraded again. Now the drive sometimes comes up as sdc and sometimes as sde, but whichever letter it gets, its first partition has no listing in /dev (e.g. if it's sdc, /dev/sdc1 doesn't exist even though fdisk shows the partition). If I use fdisk to delete and recreate the partition, it shows up in /dev and I can add the drive to the array, but after a reboot the node is gone again. Do you think the drive is shot? I'm going to see if I can find some diagnostic tools from Seagate in the meantime.
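Before resorting to delete-and-recreate in fdisk, it may be worth just asking the kernel to re-read the partition table. A sketch (the device name sdc is an assumption, since per the post it moves between sdc and sde across boots):

```shell
# When fdisk shows a partition but the /dev node is missing, re-read the
# partition table instead of recreating the partition.
check_part() {
    dev=$1
    if [ ! -b "${dev}1" ]; then
        echo "no node for ${dev}1 - would run: blockdev --rereadpt $dev"
        # (or equivalently: partprobe $dev)
    fi
}
check_part /dev/sdc
```

For the health question, `smartctl -H /dev/sdc` and `smartctl -t long /dev/sdc` from smartmontools can stand in for the vendor tools while hunting down the Seagate utilities.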
 