LinuxQuestions.org > Forums > Linux Forums > Linux - Distributions > Slackware
Slackware This Forum is for the discussion of Slackware Linux.
Old 10-09-2012, 02:00 AM   #1
polch
LQ Newbie
 
Registered: Sep 2010
Posts: 22

Rep: Reputation: 0
RAID devices numbering and their designation in fstab.


Hi, I've posted in the installation subforum about this problem (Install on raid setup and /dev/md127), but I had no feedback.

My RAID devices are numbered /dev/md127 and /dev/md126 (instead of /dev/md2 and /dev/md3). I understand the reasons for this numbering, but I would like to know how to deal with it on Slackware:

(1) I have changed fstab to use /dev/md/slackware:X instead of /dev/mdX
(2) I could change mdadm.conf
(3) I could create the arrays with an option that forces the numbering (I have tried --name, but it doesn't change anything...)

What solution would you suggest, please?

Regards.

Paul.
 
Old 10-09-2012, 03:29 AM   #2
kikinovak
MLED Founder
 
Registered: Jun 2011
Location: Montpezat (South France)
Distribution: CentOS, OpenSUSE
Posts: 3,453

Rep: Reputation: 2154
The root of the Slackware file tree contains an excellent document about setting up RAID:

http://slackware.bokxing-it.nl/mirro...EADME_RAID.TXT

I read through the link you copied in your message. Your setup looks fine, except for one vital point that seems to be the cause of your problem: there's no mention of setting up LILO to actually use your RAID array.

Your mileage may vary, but when I install a Slackware server using software RAID, I do two things after exiting the installer (the EXIT option in the installer menu).

Chroot into the newly installed Slackware system:

Code:
# chroot /mnt
The first thing is optional, but since I'm already in my newly installed Slackware, I create an initrd using mkinitrd and /etc/mkinitrd.conf. I do this because some exotic servers require additional modules for their disk controllers to be explicitly listed in the MODULES='' line. But what you have to do here in any case is add this:

Code:
RAID="1"
Caution: this line in mkinitrd.conf does not mean you're using RAID level 1. It simply means you do use RAID - regardless if it's RAID 0, 1, 5, whatever.
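For reference, a minimal /etc/mkinitrd.conf along those lines might look like the sketch below. The kernel version, root device, and filesystem are assumptions (taken to match the lilo.conf further down), so adjust them to your own system:

```shell
# /etc/mkinitrd.conf -- minimal sketch; kernel version,
# root device, and filesystem below are assumptions
KERNEL_VERSION="3.2.29"
MODULE_LIST="ext4"
ROOTDEV="/dev/md3"
ROOTFS="ext4"
RAID="1"
```

Running "mkinitrd -F" then builds /boot/initrd.gz from this file.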

Then, my lilo.conf looks something like this:

Code:
...
append="nomodeset quiet vt.default_utf8=1 ipv6.disable=1"
boot=/dev/md1
compact
lba32
raid-extra-boot = mbr-only
...
timeout = 30
...
image = /boot/vmlinuz-generic-3.2.29
  initrd = /boot/initrd.gz
  root = /dev/md3
  label = 14.0-64bit
  read-only
Have LILO take the changes into account:

Code:
# lilo
Then exit the chrooted environment and reboot:

Code:
# exit
# reboot
Software RAID (level 1 or 5) works nicely here on various servers running 13.37 and 14.0.
 
Old 10-09-2012, 03:35 AM   #3
Slackovado
Member
 
Registered: Mar 2005
Location: BC, Canada
Distribution: Slackware 14.2 x64
Posts: 308

Rep: Reputation: 70
Quote:
Originally Posted by polch View Post
Hi, I've posted in the installation subforum about this problem (Install on raid setup and /dev/md127), but I had no feedback.

My RAID devices are numbered /dev/md127 and /dev/md126 (instead of /dev/md2 and /dev/md3). I understand the reasons for this numbering, but I would like to know how to deal with it on Slackware:

(1) I have changed fstab to use /dev/md/slackware:X instead of /dev/mdX
(2) I could change mdadm.conf
(3) I could create the arrays with an option that forces the numbering (I have tried --name, but it doesn't change anything...)

What solution would you suggest, please?

Regards.

Paul.
I was upgrading storage on a small server last week (Slack 13.37), and when I created the partitions and RAID 1 arrays I ended up with names like /dev/md0p1 and /dev/md1p1, etc.
They kept coming back with these names, but when I chrooted to the root filesystem they were not found there in /dev.
I was getting frustrated and wasted many hours on this, googled many sites and read a lot, but never found a conclusive solution.
I ended up wiping everything and starting from scratch with the partitioning.
One thing I did after booting from a Slack install DVD was to stop the auto-detected arrays and reassemble them as /dev/md0, /dev/md1, etc.
That seemed to stick, and after that all was OK.
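That stop-and-reassemble step can be sketched as follows; the array and partition names are examples only, not taken from my actual box:

```shell
# Stop the auto-assembled arrays (names are examples)
mdadm --stop /dev/md127
mdadm --stop /dev/md126

# Reassemble them under the device names you actually want
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
mdadm --assemble /dev/md1 /dev/sda2 /dev/sdb2
```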
All my raid activities were guided by this how-to
http://slackware.com/~amrit/README_RAID.TXT

Sorry I don't have a definite answer for you. If I find more info on this topic then I'll post it here.
 
Old 10-09-2012, 07:17 AM   #4
wildwizard
Member
 
Registered: Apr 2009
Location: Oz
Distribution: slackware64-14.0
Posts: 875

Rep: Reputation: 282
Oh, so many things are wrong here.

Some pointers:

1. The auto-detect partition type fd is part of your problem. It is deprecated and should not be used, and it is the reason why you're seeing one of your RAID arrays come up but not the others.
2. Proper RAID startup now requires an initrd, as per kikinovak's suggestion.
3. Device names are given when you create or assemble arrays. When you create your arrays, you should give /dev/md/root or /dev/md/swap (those are suggestions; you can change them to suit what you want to see).
4. --name does not set the device name.

So, for anyone following the README_RAID.TXT: do not use partition type fd, or you will have all sorts of grief and confusion, as it no longer works properly. The correct partition type is da.
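Changing an existing partition to type da is done with the t command in fdisk; the disk and partition number in this transcript are examples:

```shell
# fdisk /dev/sda
Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): da
Command (m for help): w
```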
 
Old 10-10-2012, 01:55 AM   #5
polch
LQ Newbie
 
Registered: Sep 2010
Posts: 22

Original Poster
Rep: Reputation: 0
Thank you for your replies.

So instead of
Code:
Ensure that the partition type is Linux RAID Autodetect (type FD).
we should have
Code:
Ensure that the partition type is Non-FS data (type DA).

And instead of doing
Code:
mdadm --create /dev/md2 --level 1 --raid-devices 2 /dev/sda2 /dev/sdb2
we should do, for instance:
Code:
mdadm --create /dev/md/swap --level 1 --raid-devices 2 /dev/sda2 /dev/sdb2
Should we apply the same naming scheme to --metadata=0.90 arrays? Does LILO accept boot=/dev/md/boot?


I can't contact the author of the README_RAID.TXT. How can we update this doc for the next release?

Quote:
I read through the link you copied in your message. Your setup looks fine, except for one vital point that seems to be the cause of your problem: there's no mention of setting up LILO for actually using your RAID array.
Yes, I've set up LILO as described in the doc. My system boots from my RAID devices and all is OK. I don't use an initrd.
 
Old 10-10-2012, 02:06 AM   #6
kikinovak
MLED Founder
 
Registered: Jun 2011
Location: Montpezat (South France)
Distribution: CentOS, OpenSUSE
Posts: 3,453

Rep: Reputation: 2154
I noticed one crucial thing, though. Please correct me if I'm wrong. A couple of days ago I set up a server with Slackware 14.0: four 250 GB hard disks, to be configured as a RAID 5 array. I set things up as usual and... couldn't boot it. The error message at boot time told me that /dev/md3 (where my root partition was supposed to be) was faulty and couldn't be mounted. (One of those moments where you dream of opening a boat repair shop on an island without electricity somewhere in the Pacific...)

After an unnerving afternoon, I found the solution. During the installation, once I had created the RAID arrays, I simply let the synchronization process finish (which took about an hour and a half). Only then did I proceed with the installation, setting up the initrd and configuring LILO. The order may not be important here, but here's what I conclude from my various experiments: RAID 5 arrays must be synchronized (that is, cat /proc/mdstat showing a nice [UUUU]) before the first reboot, otherwise they won't come up.
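Watching the synchronization is straightforward; the sample status line is only indicative of what a fully synced four-disk RAID 5 can look like:

```shell
# Refresh the RAID status every two seconds until the resync completes
watch -n 2 cat /proc/mdstat

# A fully synchronized array shows all members up, e.g.:
#   md3 : active raid5 sdd3[3] sdc3[2] sdb3[1] sda3[0]
#         ... [4/4] [UUUU]
```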
 
Old 10-10-2012, 02:24 AM   #7
TracyTiger
Member
 
Registered: Apr 2011
Location: California, USA
Distribution: Slackware
Posts: 528

Rep: Reputation: 273
Quote:
Originally Posted by Slackovado View Post
I was upgrading storage on a small server last week (Slack 13.37) and when I created the partitions and raid1 arrays I ended up with raid names like /dev/md0p1 and /dev/md1p1 etc.
They kept coming back with these names but when I chrooted to the root filesystem they were not found there in /dev.
This sounds similar to a problem I had and was unable to solve a few months ago:
http://www.linuxquestions.org/questi...-a-4175418890/

I had read somewhere that RAID devices could now be partitioned. I tried, but the encrypted devices would not be remapped after a reboot.

Partitioning a disk then creating LUKS encrypted RAID devices upon which file systems can be created works fine.

However, creating a LUKS encrypted RAID device then partitioning that device and building a file system on the resulting partition appears to work but doesn't make it through a reboot. I never found a solution.

Last edited by TracyTiger; 10-10-2012 at 02:30 AM. Reason: Added that the devices were encrypted
 
Old 10-17-2012, 11:52 PM   #8
pdi
Member
 
Registered: May 2008
Posts: 50

Rep: Reputation: 59
--auto

Quote:
we should do
Code:
mdadm --create /dev/md/swap --level 1 --raid-devices 2 /dev/sda2 /dev/sdb2
According to the raid wiki, if you use non-standard device names, include the option
Code:
--auto=md
for non-partitionable arrays, or
Code:
--auto=mdp
for partitionable arrays.
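Combined with the create command quoted above, that would look something like this (device and partition names are examples):

```shell
# Non-partitionable array with a non-standard device name
mdadm --create /dev/md/swap --auto=md --level 1 --raid-devices 2 /dev/sda2 /dev/sdb2
```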
 
Old 10-19-2012, 03:58 AM   #9
Slackovado
Member
 
Registered: Mar 2005
Location: BC, Canada
Distribution: Slackware 14.2 x64
Posts: 308

Rep: Reputation: 70
Quote:
Originally Posted by pdi View Post
According to the raid wiki, if you use non-standard device names, include the option
Code:
--auto=md
for non-partitionable arrays, or
Code:
--auto=mdp
for partitionable arrays.
I've tried it; it didn't work.
The kernel still assembled them as partitionable arrays.
 
Old 10-19-2012, 09:14 AM   #10
polch
LQ Newbie
 
Registered: Sep 2010
Posts: 22

Original Poster
Rep: Reputation: 0
Moreover, man mdadm says that option isn't useful with recent versions plus udev... I don't understand why, but I do as the man page says.
 
Old 11-08-2012, 09:50 AM   #11
lokutus25
LQ Newbie
 
Registered: Feb 2011
Posts: 7

Rep: Reputation: 1
I did my tests and I found the solution, at least in my case.
The problem seems to be located where the booting kernel assembles the previously created RAID: it has to associate the RAID device name with the UUID of the RAID device.
In previous releases of mdadm this was based on the super-minor number written directly in the RAID superblock, with metadata=0.90.
With the latest releases, metadata=1.2 is the default and super-minor seems deprecated. It's no use forcing the RAID metadata to version 0.90; super-minor is ignored.
The association between device name and UUID is therefore made in mdadm.conf, which has to be configured and embedded in the initramfs of the kernel.
In short:
Code:
# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# update-initramfs -u
The first command appends the description of each array, one per row; the second (on Ubuntu) updates the content of the initramfs with the mdadm.conf file.

This did the trick for me. It worked even when booting a live system and managing the RAID in a chrooted environment.
Hope this helps.
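On Slackware, the equivalent of that Ubuntu recipe would presumably be the following; note that Slackware keeps the config at /etc/mdadm.conf rather than /etc/mdadm/mdadm.conf, and rebuilds the initrd with mkinitrd, so treat the paths and flags here as assumptions to adapt:

```shell
# Record the assembled arrays (Slackware uses /etc/mdadm.conf)
mdadm --detail --scan >> /etc/mdadm.conf

# Rebuild the initrd so the new mdadm.conf is embedded in it
# (-F reads the settings from /etc/mkinitrd.conf)
mkinitrd -F
```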
 
Old 11-09-2012, 04:47 AM   #12
polch
LQ Newbie
 
Registered: Sep 2010
Posts: 22

Original Poster
Rep: Reputation: 0
Yes, I do it like you do.

Also, be careful to use the FD partition type for the boot partition (it will be auto-assembled). The DA partition type won't be auto-assembled (but it will work with the "mdadm --detail --scan >> /etc/mdadm/mdadm.conf" approach).
 
Old 01-24-2013, 05:34 PM   #13
mRgOBLIN
Slackware Contributor
 
Registered: Jun 2002
Location: New Zealand
Distribution: Slackware
Posts: 999

Rep: Reputation: 231
Quote:
Originally Posted by polch View Post
Yes, I do it like you do.

Also, be careful to use the FD partition type for the boot partition (it will be auto-assembled). The DA partition type won't be auto-assembled (but it will work with the "mdadm --detail --scan >> /etc/mdadm/mdadm.conf" approach).
After some research and testing, it looks like wildwizard is correct in everything, other than neglecting to mention the use of --metadata=0.90 if you intend to boot from a partition using LILO.

Using an initrd, type DA does get auto-assembled even without the use of an mdadm.conf.

I'll do some further testing, talk with the team, and see if we'll make any other changes.
 
Old 01-25-2013, 02:33 PM   #14
polch
LQ Newbie
 
Registered: Sep 2010
Posts: 22

Original Poster
Rep: Reputation: 0
For the root partition type (DA or FD), the documentation could cover both cases (initrd and no initrd).
 
Old 01-25-2013, 06:53 PM   #15
chemfire
Member
 
Registered: Sep 2012
Posts: 422

Rep: Reputation: Disabled
If you don't want to boot with an initrd and you are still using arrays with the partition type set to fd and v0.90 metadata, the simple solution is just to mount the volumes by UUID.

An fstab entry might look like:

Code:
UUID=41c22818-fbad-4da6-8196-b816df0b7aa8 /boot ext3 defaults 0 0
You can determine the UUIDs by running blkid. Personally, I still don't like dealing with initrds on my systems. I think it's needless complexity in most cases.
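For example (the device name here is illustrative only; blkid prints the UUIDs of whatever block devices you actually have):

```shell
# Print type, label, and UUID for every block device
blkid

# Or query just the UUID of the device you want to mount
blkid -s UUID /dev/md0
```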
 
  

