Old 07-28-2008, 12:01 AM   #1
checkmate3001
Member
 
Registered: Sep 2007
Location: Folsom, California
Distribution: Debian 4.0 (Etch), Debian 5.0 (Lenny), Ubuntu 8.04
Posts: 301

Rep: Reputation: 32
install grub with software raid (mdadm) - safely boot from alternate drive(s)


Hello all,

I have successfully set up software RAID1 with a hot spare (three drives total) on my system, and I want to test its ability to boot from one of the alternate drives if the primary dies.

/proc/mdstat
Code:
Personalities : [raid1]
md2 : active raid1 sdc6[2](S) sda6[0] sdb6[1]
      146480512 blocks [2/2] [UU]

md1 : active raid1 sdc5[2](S) sda5[0] sdb5[1]
      4883648 blocks [2/2] [UU]

md0 : active raid1 sdc1[2](S) sda1[0] sdb1[1]
      3903680 blocks [2/2] [UU]

md3 : active raid1 sdc7[2](S) sda7[0] sdb7[1]
      1020032 blocks [2/2] [UU]
fdisk -l
Code:
Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         486     3903763+  fd  Linux raid autodetect
/dev/sda2             487       19457   152384557+   5  Extended
/dev/sda5             487        1094     4883728+  fd  Linux raid autodetect
/dev/sda6            1095       19330   146480638+  fd  Linux raid autodetect
/dev/sda7           19331       19457     1020096   fd  Linux raid autodetect

Disk /dev/sdb: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1         486     3903763+  fd  Linux raid autodetect
/dev/sdb2             487       19457   152384557+   5  Extended
/dev/sdb5             487        1094     4883728+  fd  Linux raid autodetect
/dev/sdb6            1095       19330   146480638+  fd  Linux raid autodetect
/dev/sdb7           19331       19457     1020096   fd  Linux raid autodetect

Disk /dev/sdc: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *           1         486     3903763+  fd  Linux raid autodetect
/dev/sdc2             487       19457   152384557+   5  Extended
/dev/sdc5             487        1094     4883728+  fd  Linux raid autodetect
/dev/sdc6            1095       19330   146480638+  fd  Linux raid autodetect
/dev/sdc7           19331       19457     1020096   fd  Linux raid autodetect
End of /boot/grub/menu.lst
Code:
## ## End Default Options ##

title           Debian GNU/Linux, kernel 2.6.26
root            (hd0,0)
kernel          /boot/vmlinuz-2.6.26 root=/dev/md0 ro
initrd          /boot/initrd.img-2.6.26
savedefault

title           Debian GNU/Linux, kernel 2.6.26 (single-user mode)
root            (hd0,0)
kernel          /boot/vmlinuz-2.6.26 root=/dev/md0 ro single
initrd          /boot/initrd.img-2.6.26
savedefault

title           Debian GNU/Linux, kernel 2.6.18-6-amd64
root            (hd0,0)
kernel          /boot/vmlinuz-2.6.18-6-amd64 root=/dev/md0 ro
initrd          /boot/initrd.img-2.6.18-6-amd64
savedefault

title           Debian GNU/Linux, kernel 2.6.18-6-amd64 (single-user mode)
root            (hd0,0)
kernel          /boot/vmlinuz-2.6.18-6-amd64 root=/dev/md0 ro single
initrd          /boot/initrd.img-2.6.18-6-amd64
savedefault

title           Debian GNU/Linux, kernel memtest86+
root            (hd0,0)
kernel          /boot/memtest86+.bin

### END DEBIAN AUTOMAGIC KERNELS LIST
Should I be able to remove sda1 from the array and boot just fine? I'd rather know for sure before I give it a go.

Thank you.

Last edited by checkmate3001; 08-03-2008 at 04:29 AM. Reason: changed title for better hits in search
 
Old 07-28-2008, 01:29 AM   #2
mritch
Member
 
Registered: Nov 2003
Location: austria
Distribution: debian
Posts: 667

Rep: Reputation: 30
Back up first. Be sure to have the GRUB bootloader in the MBR of all the drives (important with software RAID; grub-install hdX). Then just remove one drive. There's an alternative way: mark a drive faulty - see man mdadm for more info.
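For reference, the mark-a-drive-faulty route looks roughly like this (a sketch only - /dev/md0 and /dev/sda1 are example names taken from this thread, these commands modify a live array, and you should read man mdadm before running them):

```shell
# Mark one RAID1 member faulty and remove it; the hot spare should
# take over and resync automatically.  Substitute your own array
# and partition names.
mdadm /dev/md0 --fail /dev/sda1
mdadm /dev/md0 --remove /dev/sda1
cat /proc/mdstat                 # watch the spare rebuild
mdadm /dev/md0 --add /dev/sda1   # re-add the drive once testing is done
```

Unlike physically pulling a drive, this tests the array's failover without testing the BIOS/bootloader path, so it complements rather than replaces the unplug test.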

sl, ritch
 
Old 07-28-2008, 01:59 AM   #3
storkus
Member
 
Registered: Jun 2008
Posts: 310

Rep: Reputation: 45
I just have to ask, because it seems so obvious to me: if you have three drives, why aren't you running RAID 5 to get the best of both worlds?

Mike
 
Old 07-29-2008, 12:32 AM   #4
checkmate3001
Member
 
Registered: Sep 2007
Location: Folsom, California
Distribution: Debian 4.0 (Etch), Debian 5.0 (Lenny), Ubuntu 8.04
Posts: 301

Original Poster
Rep: Reputation: 32
Why not RAID 5

I chose RAID1 over RAID5 primarily for reliability rather than speed.

I bought all three drives together, and if one dies for some reason, I would naturally start to wonder about the other two. Also, if two drives develop faults one after the other (with time for the spare to rebuild in between), I'm still OK.

I've also read that RAID5 on a three-drive setup doesn't offer much of a performance gain. I would rather run RAID5 with five or more drives.

Ideally I would like to have RAID6.

Correct me if I'm wrong.
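As a rough back-of-the-envelope comparison (my own arithmetic, assuming three 160 GB drives as in this thread):

```shell
# Approximate usable capacity for 3 x 160 GB drives under each layout.
N=3; SIZE=160
echo "RAID1 (2-way mirror + hot spare): ${SIZE} GB usable"
echo "RAID5 over ${N} drives:           $(( (N - 1) * SIZE )) GB usable"
# Both layouts tolerate one failed drive at a time; the RAID1+spare
# layout can survive a second failure once the spare finishes rebuilding.
```

So RAID5 would roughly double the usable space here, at the cost of parity writes and the reliability trade-off described above.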
 
Old 07-29-2008, 12:35 AM   #5
checkmate3001
Member
 
Registered: Sep 2007
Location: Folsom, California
Distribution: Debian 4.0 (Etch), Debian 5.0 (Lenny), Ubuntu 8.04
Posts: 301

Original Poster
Rep: Reputation: 32
Quote:
Originally Posted by mritch View Post
Back up first. Be sure to have the GRUB bootloader in the MBR of all the drives (important with software RAID; grub-install hdX). Then just remove one drive. There's an alternative way: mark a drive faulty - see man mdadm for more info.

sl, ritch

Thank you. I will try that out sometime this week, when I get a chance.
 
Old 08-02-2008, 03:20 PM   #6
checkmate3001
Member
 
Registered: Sep 2007
Location: Folsom, California
Distribution: Debian 4.0 (Etch), Debian 5.0 (Lenny), Ubuntu 8.04
Posts: 301

Original Poster
Rep: Reputation: 32
Check if grub is installed on hard drives

OK - before I attempt to install GRUB on my alternate hard drives, I want to check whether it is already installed. I also want to be able to verify, after I try to install it, that the install actually worked.

Is there a way to verify that GRUB is installed on my alternate hard drives?
 
Old 08-03-2008, 04:46 AM   #7
checkmate3001
Member
 
Registered: Sep 2007
Location: Folsom, California
Distribution: Debian 4.0 (Etch), Debian 5.0 (Lenny), Ubuntu 8.04
Posts: 301

Original Poster
Rep: Reputation: 32
install grub mdadm raid1 with hotspare

Well, I didn't find a good way to verify whether GRUB was installed on my alternate drives. The only way I could really find out was to unplug my primary drive and boot the system. I was left with a blinking cursor, so I took that as a 'no'.
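For what it's worth, one rough check is to look for the literal string "GRUB", which GRUB legacy's stage1 embeds in the boot sector (a sketch, not a definitive test - /dev/sdb is an example device, and reading it needs root):

```shell
# Read the first 512 bytes (the MBR) and search the raw data for
# the "GRUB" string; grep -a treats the binary input as text.
if dd if=/dev/sdb bs=512 count=1 2>/dev/null | grep -aq GRUB; then
    echo "GRUB signature found in MBR of /dev/sdb"
else
    echo "no GRUB signature in MBR of /dev/sdb"
fi
```

A missing signature almost certainly means GRUB isn't there; a present one doesn't prove the embedded stage1.5/stage2 pointers are valid, so an actual unplug-and-boot test is still the final word.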

After a few searches and some fiddling around, I found out how to install GRUB on the alternate drives.

Some of this may be Debian-specific, so double-check before using it yourself.

I ran:
Code:
grub-install /dev/sdb1
Searching for GRUB installation directory ... found: /boot/grub
Installation finished. No error reported.
This is the contents of the device map /boot/grub/device.map.
Check if this is correct or not. If any of the lines is incorrect,
fix it and re-run the script `grub-install'.

(fd0)   /dev/fd0
(hd0)   /dev/sda
(hd1)   /dev/sdb
(hd2)   /dev/sdc

grub-install /dev/sdc1
Searching for GRUB installation directory ... found: /boot/grub
Installation finished. No error reported.
This is the contents of the device map /boot/grub/device.map.
Check if this is correct or not. If any of the lines is incorrect,
fix it and re-run the script `grub-install'.

(fd0)   /dev/fd0
(hd0)   /dev/sda
(hd1)   /dev/sdb
(hd2)   /dev/sdc
Note: If you get an error like "grub: no corresponding bios drive" try running:
Code:
grub-install --recheck /dev/sdx
Where sdX is your primary drive (e.g. sda).
It checks all the available drives to boot from and writes them into the device.map file. I chose to just re-install on the primary drive, because the command doesn't work without a drive argument... In my opinion it should, since it only updates device.map. You could probably also edit the file by hand, but I would read the man page first to be sure.

Then I ran:
Code:
#grub

    GNU GRUB  version 0.97  (640K lower / 3072K upper memory)

       [ Minimal BASH-like line editing is supported.   For
         the   first   word,  TAB  lists  possible  command
         completions.  Anywhere else TAB lists the possible
         completions of a device/filename. ]

grub> root (hd1,0)
 Filesystem type is ext2fs, partition type 0xfd

grub> setup (hd1)
 Checking if "/boot/grub/stage1" exists... yes
 Checking if "/boot/grub/stage2" exists... yes
 Checking if "/boot/grub/e2fs_stage1_5" exists... yes
 Running "embed /boot/grub/e2fs_stage1_5 (hd1)"...  15 sectors are embedded.
succeeded
 Running "install /boot/grub/stage1 (hd1) (hd1)1+15 p (hd1,0)/boot/grub/stage2 /boot/grub/menu.lst"... succeeded
Done.

grub> root (hd2,0)
 Filesystem type is ext2fs, partition type 0xfd

grub> setup (hd2)
 Checking if "/boot/grub/stage1" exists... yes
 Checking if "/boot/grub/stage2" exists... yes
 Checking if "/boot/grub/e2fs_stage1_5" exists... yes
 Running "embed /boot/grub/e2fs_stage1_5 (hd2)"...  15 sectors are embedded.
succeeded
 Running "install /boot/grub/stage1 (hd2) (hd2)1+15 p (hd2,0)/boot/grub/stage2 /boot/grub/menu.lst"... succeeded
Done.
Finally, I added a couple of boot entries to my menu.lst as fallbacks, so GRUB (or I, manually) can select an alternate drive if I need to.
My menu.lst (this part is very Debian-specific):
Code:
== cut to save space ==
## fallback num
# Set the fallback entry. This allows you to boot from an alternate drive
# if the first drive of the array fails.
fallback        5

== cut to save space ==

### END DEBIAN AUTOMAGIC KERNELS LIST

## Fallback Entry (fallback 5)

title           Debian GNU/Linux, kernel 2.6.26 (sdb)
root            (hd1,0)
kernel          /boot/vmlinuz-2.6.26 root=/dev/md0 ro
initrd          /boot/initrd.img-2.6.26

## Fallback Entry (fallback 6)
title           Debian GNU/Linux, kernel 2.6.26 (sdc)
root            (hd2,0)
kernel          /boot/vmlinuz-2.6.26 root=/dev/md0 ro
initrd          /boot/initrd.img-2.6.26

I unplugged my primary drive (after shutting down, of course) and booted it up. Lo and behold, it came alive!
I haven't tried booting off just a single drive, but I'm fairly sure it would work. I'm going to make a boot floppy as an additional backup, just in case.
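The boot-floppy procedure from the GRUB 0.97 manual looks like this (a sketch - the stage-file directory varies by distro and architecture, e.g. /usr/lib/grub/i386-pc or /usr/lib/grub/x86_64-pc on Debian, and this overwrites whatever is on the floppy):

```shell
# Write stage1 into the floppy's boot sector, then stage2 immediately
# after it.  Run from the directory containing your GRUB stage files.
cd /usr/lib/grub/x86_64-pc    # example path; check your own system
dd if=stage1 of=/dev/fd0 bs=512 count=1
dd if=stage2 of=/dev/fd0 bs=512 seek=1
```

Booting from that floppy drops you at a bare grub> prompt, from which you can issue root/kernel/initrd/boot commands by hand against whichever drive still works.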

Hope this helps someone.

Last edited by checkmate3001; 08-03-2008 at 04:50 AM. Reason: fix a code section and add a title
 
Old 08-06-2008, 01:35 PM   #8
mostlyharmless
Senior Member
 
Registered: Jan 2008
Distribution: Slackware 14.1 (multilib) with kernel 3.15.5
Posts: 1,534
Blog Entries: 12

Rep: Reputation: 171Reputation: 171
nice post

Nice post on RAID 1 and GRUB; I'm sure it'll be helpful for a lot of people who ask the same GRUB questions. It's not really Debian-specific, though; other than the word "Debian", it ought to work with any distro - certainly with minimal renaming of files for Slackware, PCLinuxOS, or Ubuntu, as far as I know. Still, it's probably a good idea to put a warning/disclaimer on it.
 
  

