RAID 5 (soft) + os drive = no boot
I'm having some issues booting when my RAID 5 array is attached to my "server".
Here's my setup:
mobo / cpu: AMD 2000+ , GA-VAXP
- 4 x 250GB SATA (RAID 5) connected to a Promise 300tx2 SATA controller.
This array was created in a previous Debian setup; I'm trying to migrate it over to a new Debian server.
- 1 x 200GB (no RAID), fresh install
I've installed Debian, openSUSE, and Ubuntu to the 200GB drive. All of them seem to halt at GRUB, right after "savedefault", i.e. the last line in grub.conf. When I remove the four drives from the controller, Debian boots OK. "Safe mode" in grub.conf fails in various distros as well.
I'm fairly certain I never put an MBR on md0 (the RAID 5), as I always used a separate drive for booting and used the RAID 5 for data storage.
The grub.conf was regenerated about 8 times by 5 different OSes (those mentioned above), and all equally failed to boot the system. Is a separate RAID 5 problematic?
What other info would I need to post to help diagnose the problem? Would lilo help?
PS. Live CDs work, although as far as I know no software RAID is assembled by a live CD (e.g. Fedora Live)?
If I understand you correctly, then you are trying to boot from the 200GB separate drive and access the data on the array. You are unable to do either. Is that correct? Are you building the system first and then trying to add the array controller? If so, then the BIOS is probably reshuffling drives so that grub is no longer correct.
First of all, disconnect the raid controller and install your favorite version of linux on the separate drive so that it works and boots. Unless that step is done, there is simply no hope of ever getting your system up. You must be able to boot before you even bother with the array.
After you have a bootable system, add the controller back in, go into the BIOS, and make sure that it is set up to boot from the separate drive. If at that point it won't boot, load a live CD, find your boot drive, and fix grub so that it will boot with the array attached. You might find "Super Grub Disk" a big help in getting grub fixed.
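If grub's MBR does need reinstalling from the live CD, a minimal sketch using GRUB legacy's batch mode might look like this. The (hd0,0)/(hd0) values are assumptions (boot partition as the first partition of the first BIOS disk); adjust them to whatever your layout actually is:

```shell
# Sketch only: re-run GRUB legacy's installer against the MBR of the
# first BIOS disk. (hd0,0) and (hd0) are assumptions about drive order.
sudo grub --batch <<'EOF'
root (hd0,0)
setup (hd0)
quit
EOF
```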
Once the system will boot with the "raid controller" attached, then it's time to deal with accessing the array. So, is it a "RAID" that was configured by the BIOS or is it a software raid that you configured using the "mdadm" tools? If mdadm, then search on mdadm and find out how to reassemble an existing array and create a proper mdadm.conf file. If it's a BIOS array, then install the dmraid package, run "dmraid -ay", and look in /dev/mapper to find your array.
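For the mdadm case, reassembly might look something like this - a sketch only, assuming the array members are partitions sda1 through sdd1 (substitute whatever your members actually are):

```shell
# Sketch only: try automatic assembly from the on-disk superblocks first
sudo mdadm --assemble --scan
# or name the members explicitly (sda1..sdd1 are assumptions):
sudo mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
# then record the array so it reassembles on every boot (Debian's conf path):
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```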
Post back with more info after reading the above, and we'll try to help you further.
Thanks for the great reply. I do have an Ubuntu system working on the first ide drive, without the 4xSATA drives connected. To clarify:
1> You're correct. The 4x SATA drives are attached to a Promise (non-RAID) SATA controller, which I am not trying to boot from.
2> (this is where Im stuck)
"After you have a bootable system, then add the controller back in, and go into the BIOS and make sure that it is setup to boot from the separate drive. If at that point it won't boot, then load a livecd, find your boot drive and fix grub so that it will boot with the array attached. You might find "Super Grub Disk" a big help in getting grub fixed."
I've downloaded the Super Grub Disk CD and read up on grub (as best as I can), but I am unable to find the fault in my grub settings. I'm very new to grub.
On the primary IDE drive I have hda (according to the disks manager in Ubuntu):
I'm assuming (hd0,0) is the first partition of the first drive, but where does that naming come from, given that Ubuntu shows it as hda1? Are they the same thing?
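From what I've read since, the mapping seems to be GRUB legacy counting both disks and partitions from zero, while Linux counts partitions from one - so (hd0,0) and hda1 would be the same partition. A tiny sketch of my understanding:

```shell
# my assumption: GRUB legacy (hdX,Y) -> Linux /dev/hda(Y+1) on the first IDE disk
disk=0
part=0
echo "(hd${disk},${part}) -> /dev/hda$((part + 1))"
```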
Here is the menu.lst that does successfully boot into
Ubuntu -without- the Promise card + SATA drives connected:
## ## End Default Options ##
title Ubuntu, kernel 2.6.15-26-386
kernel /boot/vmlinuz-2.6.15-26-386 root=/dev/hda1 ro quiet splash
title Ubuntu, kernel 2.6.15-26-386 (recovery mode)
kernel /boot/vmlinuz-2.6.15-26-386 root=/dev/hda1 ro single
Again, with the Promise card + drives attached, it simply stops at "savedefault" / "boot".
PS. I installed Ubuntu without the Promise card / drives connected, so grub / the OS should have no knowledge of them, correct?
-For the record, the Promise card is:
-My motherboard (GA-VAXP) does have RAID, but it's turned off.
Considering that this is a new install, is there some reason that you can't just reinstall with the controller plugged in? It would save a lot of trouble. One of the problems is that your Ubuntu is on kernel 2.6.15. They made some SATA config changes in 2.6.16, and trying to walk you through that, plus a live CD that uses 2.6.17, in addition to finding a grub problem and setting up a software array, is just asking for big trouble.
Have you tried booting in single user mode to see what messages there are? Your multi-user quiet/splash settings are just a handicap at this point.
By the way, I'm a LOT more familiar with debian than I am with ubuntu.
I've reinstalled Debian with the SATA array attached, and the results are the same.
I've taken some photographs to hopefully illustrate better what is happening.
I noticed that my motherboard BIOS referred to "boot order: scsi, raid", but that must refer to the onboard RAID anyway. I switched it to SCSI first, then RAID, and it still didn't alleviate the problem. (PS. RAID is disabled completely on the mobo anyway, but the grayed-out option showed booting RAID first, so I changed it.)
Within Debian setup I opted for one large ext3 partition and swap on the 200GB primary IDE. I could have left the 4x250GBs unmounted, but I decided to mount them to "media" (see second photo).
After that I opted for "Desktop system", "file server", and the one option at the very bottom that eludes me at the moment.
The grub portion claimed that it found no previous installs (probably because I formatted the partition where Ubuntu previously was), and I let it do its thing.
However, the end result was the same: a reboot, and it couldn't get past grub. It seems to stop at "savedefault". It's pretty frustrating, I must admit.
Are there some flags that perhaps should be set on the four RAID SATA partitions? Bootable is off on all of them.
thanks for looking at it,
Where the system halts.
The partition portion of Debian setup.
The motherboard bios.
Here is some more info from the "Super grub CD" that I booted into after the fresh install of Debian.
Btw, is software RAID-5 very CPU intensive, as I have heard? I'm planning to add a RAID system to my 800MHz box. Is it enough for RAID-5, or do I just have to settle for 0+1? The box is only a file server; no need for other services.
Your system should be more than enough to provide 100Mbps over a network (12-15MB/sec), as long as you don't use it as a desktop.
I have RAID-5 on my system. Read/write performance drops to about half of a RAID-0 setup on my Athlon X2 @ 1GHz (I clock my PC down to keep the power down, plus I don't really need it all). The RAID 5 drives and the single drive are similar HDD models (7200rpm, 16MB cache).
Copying from one HDD to the RAID-5, I notice a jump to 20% core/CPU usage on heavy writes, with transfer rates around 20-35MB/s (30-40MB/s for the first 100MB; for files over 100MB it gradually slows to 20MB/s).
But on a Pentium 3 you will probably really feel the performance hit, thanks to slow SDRAM. DDR made a huge difference to the Athlon over the Pentium 3; I would say DDR memory is a prerequisite.
Thanks for the info. You run a nice system there. Are you using software RAID? As far as I know, nForce4 provides hardware RAID.
In my case the box is just a file server for my LAN, so 100Mbps is enough. I'm looking for backup; performance isn't essential.
I guess I'll test RAID-5. If it's too much for my CPU, I'll go for pure mirroring.
Integrated chipset RAID takes us back to the days of winmodems.
Hardware RAID does not use any CPU, since it has a dedicated processor (usually 500MHz+) soldered onto the RAID card.
nForce RAID, like Linux software RAID, uses the CPU to emulate the RAID controller.
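If you're unsure which kind you've ended up with, you can check from a running system - a rough sketch:

```shell
# Kernel software RAID (mdadm) arrays show up in /proc/mdstat as md0, md1, ...
cat /proc/mdstat
# BIOS "fake RAID" sets are handled by dmraid instead; this lists any it finds:
sudo dmraid -s
```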
I've narrowed my search to what you've mentioned (drives being shuffled). After Debian yielded the same results, I tried Fedora Live (test 7 - the latest?). Anyway, the "install to HD" tool wanted to install grub to the SATA drives first with the default setup, i.e. the 4x250s in RAID 5 plus a standard layout on the single PATA drive.
In Fedora I also noticed that the SATA drives are labeled sda, sdb, sdc, sdd. The PATA drive I'm trying to boot from gets sde (last). Boot order is adjustable in the installer, but it still isn't working, i.e. I moved sde to first in the boot order and installed grub to sde.
I wonder if I need to get the data off of the four-drive array and simply start from scratch somehow? I do have another mobo with four SATA ports.
Does the Linux kernel assume that sda is probably going to be the boot drive? Since I'm trying to preserve the data on the four drives, sda has always been assigned to the first SATA drive in my array.
Is that the possible reason for the non-booting issue across five(?) different distros (openSUSE, Debian, Ubuntu, Fedora, etc.)?
The Linux kernel doesn't care what the boot drive is.
1. The BIOS tells the computer to look for the OS in the MBR of a given drive (the first boot device in the BIOS).
2. That drive's MBR will have grub installed, and grub says where to look for the kernel. "The first hard disk" is whatever the BIOS presents first (the HDD on the lowest port - SATA port 1).
3. The kernel loads and reads /etc/fstab on the root filesystem for any other drives to be mounted.
This is where mine fell over: root=/dev/sda1 in grub. If that changed, a kernel panic occurred; if /dev/sdb1 in fstab changed, Linux would fall into failsafe until you fixed it.
I needed to swap the device names (/dev/sd*) for the unique UUID codes to ensure the filesystems could be found even if the drives were rearranged.
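Concretely, the swap might look like this (a sketch; the UUID placeholder stands for whatever blkid actually prints for your partition, and root=UUID= needs an initramfs that understands it, which Debian's does):

```shell
# 1. find the filesystem's UUID (placeholder shown, not a real value):
sudo blkid /dev/sda1
#    -> /dev/sda1: UUID="<uuid-from-blkid>" TYPE="ext3"
# 2. in /etc/fstab, replace the device name with the UUID:
#    UUID=<uuid-from-blkid>  /media/raid  ext3  defaults  0  2
# 3. in menu.lst, on the kernel line:
#    kernel /boot/vmlinuz-... root=UUID=<uuid-from-blkid> ro
```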