Old 02-17-2012, 07:20 PM   #1
Gerard Lally
Senior Member
 
Registered: Sep 2009
Location: Leinster, IE
Distribution: Slackware, NetBSD
Posts: 2,180

Rep: Reputation: 1763
Software RAID 1 with multiple arrays: mounts not honoured after reboot


I have two disks: /dev/sda and /dev/sdb.

I follow the RAID README and create a number of RAID 1 arrays as follows (creation commands sketched after the list):

Primary - sda1+sdb1 = /dev/md0 - / (ext4)
Logical - sda5+sdb5 = /dev/md1 - swap
Logical - sda6+sdb6 = /dev/md2 - /tmp (ext2)
Logical - sda7+sdb7 = /dev/md3 - /usr (jfs)
Logical - sda8+sdb8 = /dev/md4 - /var (jfs)
Logical - sda9+sdb9 = /dev/md5 - /home (jfs)
Logical - sda10+sdb10 = /dev/md6 - not mounted - to be used at a later stage for logical volumes
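
For reference, the creation commands looked roughly like this (reconstructed from memory, so treat it as a sketch rather than an exact transcript):
Code:
# mirror each partition pair; repeat likewise for md3..md6 with sda7..sda10
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda6 /dev/sdb6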

I follow the instructions to the letter, but after rebooting only / is mounted.

In addition, the device numbering for the arrays has changed: they now come up as /dev/md0 and /dev/md122 through /dev/md127. That is why only / is mounted: /etc/fstab is set up for /tmp on /dev/md2, /usr on /dev/md3, and so on, and those device nodes no longer exist.
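
For completeness, the relevant /etc/fstab entries look like this (mount options trimmed; the exact lines are from memory):
Code:
/dev/md0   /      ext4   defaults   1 1
/dev/md1   swap   swap   defaults   0 0
/dev/md2   /tmp   ext2   defaults   1 2
/dev/md3   /usr   jfs    defaults   1 2
/dev/md4   /var   jfs    defaults   1 2
/dev/md5   /home  jfs    defaults   1 2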

What am I doing wrong?
 
Old 02-18-2012, 01:32 AM   #2
wildwizard
Member
 
Registered: Apr 2009
Location: Oz
Distribution: slackware64-14.0
Posts: 875

Rep: Reputation: 282
Actually, README_RAID.TXT is out of date.

Some important notes on how RAID really works with more recent kernels (roughly the last 3 years):

1. Partition type fd is obsolete; don't bother with it, as it can cause problems later. Use partition type da instead.
2. You must boot with an initrd or the RAID devices won't be detected; see the section under "Using the generic kernel" for the details (sketched below).

https://raid.wiki.kernel.org/
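
Something along these lines; this is a sketch from memory, so check README_RAID.TXT and mkinitrd -h for the exact flags, and adjust the kernel version and root filesystem to your own setup:
Code:
# in fdisk, set each RAID member partition to type da (Non-FS data):
#   t -> partition number -> da -> w
fdisk /dev/sda

# build an initrd that assembles the arrays before mounting root
mkinitrd -c -k 2.6.37.6 -f ext4 -r /dev/md0 -m ext4 -R

# then point lilo.conf at /boot/initrd.gz and rerun lilo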

EDIT: Your problem may also be related to the use of logical partitions; I'm not sure RAID ever worked with those.

Last edited by wildwizard; 02-18-2012 at 01:38 AM.
 
Old 02-18-2012, 02:36 AM   #3
Richard Cranium
Senior Member
 
Registered: Apr 2009
Location: McKinney, Texas
Distribution: Slackware64 15.0
Posts: 3,858

Rep: Reputation: 2225
Plus it would be a lot easier to make one large RAID1 partition and use it as a physical volume for LVM.

Well, you could make a small RAID1 partition for /boot and put the rest of the drive on another larger RAID1 partition. For example...
Code:
bash-4.1$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] 
md1 : active raid1 sdd3[0] sde3[2]
      142128606 blocks super 1.2 [2/2] [UU]
      
md2 : active raid1 sdd1[0] sde1[1]
      248964 blocks super 1.2 [2/2] [UU]
      
md3 : active raid1 sdc2[0] sda2[1]
      624880192 blocks [2/2] [UU]
      
unused devices: <none>
bash-4.1$ sudo /sbin/pvs
  PV         VG      Fmt  Attr PSize   PFree  
  /dev/md1   mdgroup lvm2 a-   135.53g  79.50g
  /dev/md3   mdgroup lvm2 a-   595.91g 403.66g
bash-4.1$
(sdc and sda also have a small partition set aside to be used as a RAID1 /boot partition in case sdd and sde fail.)

/usr/share/mkinitrd/mkinitrd_command_generator.sh will be your undocumented friend if you go this route.
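
For example, run it against the kernel you intend to boot and it prints a suggested mkinitrd command line, including the RAID/LVM bits it detects on the running system (the kernel version here is just an example; use your own):
Code:
bash-4.1$ /usr/share/mkinitrd/mkinitrd_command_generator.sh -k 2.6.37.6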
 
Old 02-19-2012, 08:57 AM   #4
Gerard Lally
Senior Member
 
Registered: Sep 2009
Location: Leinster, IE
Distribution: Slackware, NetBSD
Posts: 2,180

Original Poster
Rep: Reputation: 1763
Quote:
Originally Posted by wildwizard View Post
Actually, README_RAID.TXT is out of date.

Some important notes on how RAID really works with more recent kernels (roughly the last 3 years):

1. Partition type fd is obsolete; don't bother with it, as it can cause problems later. Use partition type da instead.
2. You must boot with an initrd or the RAID devices won't be detected; see the section under "Using the generic kernel" for the details.

https://raid.wiki.kernel.org/

EDIT: Your problem may also be related to the use of logical partitions; I'm not sure RAID ever worked with those.
Thanks for the updated info and the link. I think the README should mention that partition type fd is deprecated.

I have a feeling the logical partitions are indeed the problem. For the moment I have done what Richard Cranium suggested and gone with a single RAID partition and multiple logical volumes, roughly as sketched below. Perhaps this is the best way after all, although in the future I would like to try "disposable" mounts like /usr on RAID0 and /home on RAID1.
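
For anyone following along, this is roughly what I ended up with (a sketch; the volume group and volume names and sizes are just my own choices):
Code:
# one large mirror, used as the sole LVM physical volume
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
pvcreate /dev/md1
vgcreate vg0 /dev/md1

# carve the mounts out of the volume group instead of separate arrays
lvcreate -L 10G -n usr vg0
lvcreate -L 10G -n var vg0
lvcreate -L 50G -n home vg0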
 
Old 02-19-2012, 09:01 AM   #5
Gerard Lally
Senior Member
 
Registered: Sep 2009
Location: Leinster, IE
Distribution: Slackware, NetBSD
Posts: 2,180

Original Poster
Rep: Reputation: 1763
Quote:
Originally Posted by Richard Cranium View Post
Plus it would be a lot easier to make one large RAID1 partition and use it as a physical volume for LVM.
Thanks. I have gone down this route and it's working fine. There seems to be no performance penalty to LVM on top of software RAID, at least in the rough check below.
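
Nothing scientific; just hdparm's buffered sequential read timing on the raw mirror versus a logical volume sitting on top of it (device names are from my setup):
Code:
hdparm -t /dev/md1
hdparm -t /dev/vg0/home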
 