RAID device numbering and designation in fstab.
Hi, I've posted in the installation subforum about this problem (Install on raid setup and /dev/md127), but I had no feedback.
My RAID devices are numbered /dev/md127 and /dev/md126 (instead of /dev/md2 and /dev/md3). I understand the reasons for this numbering, but I would like to know how to deal with it on Slackware: (1) I have changed fstab to use /dev/md/slackware:X instead of /dev/mdX; (2) I could change mdadm.conf; (3) I could create the array with an option to force the numbering (I have tried with --name but it doesn't change anything...). What solution would you suggest, please? Regards. Paul. |
The root of the Slackware file tree contains an excellent document about setting up RAID:
http://slackware.bokxing-it.nl/mirro...EADME_RAID.TXT I read through the link you copied in your message. Your setup looks fine, except for one vital point that seems to be the cause of your problem: there's no mention of setting up LILO for actually using your RAID array. Your mileage may vary, but when I install a Slackware server using software RAID, I do two things after exiting the installer (EXIT option in the installer menu). First, chroot into the newly installed Slackware system: Code:
# chroot /mnt
Then enable RAID support in the initrd configuration: Code:
RAID="1"
Then, my lilo.conf looks something like this: Code:
...
Finally, run the boot loader installer and leave the chroot: Code:
# lilo
Code:
# exit |
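Putting the steps above together, here is a hedged sketch of the whole sequence. I'm assuming the RAID="1" line lives in /etc/mkinitrd.conf and that mkinitrd's -F flag reads that file; check both against your mkinitrd version:

```
# From the installer console, after exiting the setup menu:
chroot /mnt                 # enter the newly installed system

# In /etc/mkinitrd.conf (assumed location), enable RAID support:
#   RAID="1"

mkinitrd -F                 # build the initrd from /etc/mkinitrd.conf
lilo                        # reinstall the boot loader
exit                        # leave the chroot, then reboot
```

Remember that lilo.conf must also point its initrd = line at the generated image, or the kernel will boot without RAID support.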
Quote:
They kept coming back with these names, but when I chrooted to the root filesystem they were not found there in /dev. I was getting frustrated and wasted many hours on this, googled many sites and read a lot, but never found a conclusive solution. I ended up wiping everything and starting from scratch with partitioning. One thing I ended up doing after booting from a Slack install DVD was to stop the auto-detected arrays and reassemble them as /dev/md0, /dev/md1, etc. It seemed to stick, and after that all was OK. All my RAID activities were guided by this how-to: http://slackware.com/~amrit/README_RAID.TXT Sorry I don't have a definite answer for you. If I find more info on this topic, I'll post it here. |
Oh so many things are wrong here.
Some pointers:
1. The auto-detect partition type fd is part of your problem; it is deprecated and should not be used, and it is the reason why you're seeing one of your RAID arrays come up but not the others.
2. Proper RAID startup now requires an initrd, as per kikinovak's suggestion.
3. Device names are given when you create or assemble arrays; when you create your arrays you should give /dev/md/root or /dev/md/swap (those are suggestions, and you can change them to suit what you want to see).
4. --name does not set the device name.
So for anyone following the README_RAID.TXT: do not use partition type fd or you will have all sorts of grief and confusion, as it does not work properly any more. The correct partition type is da. |
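To make point 1 concrete, changing a partition's type from fd to da is done with fdisk's t command. A rough transcript sketch (the device and partition number are examples only):

```
# fdisk /dev/sda
Command (m for help): t        <- change a partition's type
Partition number: 1            <- the RAID member partition
Hex code: da                   <- "Non-FS data" instead of fd
Command (m for help): w        <- write the table and exit
```

Repeat for each RAID member partition on each disk, then re-read or reboot before creating the arrays.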
Thank you for your replies.
So instead of: Code:
Ensure that the partition type is Linux RAID Autodetect (type FD).
we should read: Code:
Ensure that the partition type is Non-FS data (type DA).
And instead of: Code:
mdadm --create /dev/md2 --level 1 --raid-devices 2 /dev/sda2 /dev/sdb2
we should do: Code:
mdadm --create /dev/md/swap --level 1 --raid-devices 2 /dev/sda2 /dev/sdb2
Should we apply the same naming scheme to --metadata=0.90 arrays? Does lilo accept boot=/dev/md/boot? I can't contact the author of the README_RAID.TXT. How can we upgrade this doc for the next release? Quote:
|
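On the lilo question above: I can't confirm the boot=/dev/md/boot spelling, but lilo does document installing on software RAID via boot= on an md device together with the raid-extra-boot option. A hypothetical lilo.conf fragment (device names and paths are assumptions, not from the thread):

```
boot = /dev/md0              # install the boot loader on the RAID 1 boot array
raid-extra-boot = mbr-only   # also write the MBR of each underlying disk
image = /boot/vmlinuz
  initrd = /boot/initrd.gz
  root = /dev/md/root
  label = Linux
  read-only
```

raid-extra-boot = mbr-only is what lets the machine still boot if the first disk dies; check your lilo.conf(5) for the exact values supported by your version.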
I noticed one crucial thing, though. Please correct me if I'm wrong. A couple of days ago I set up a server with Slackware 14.0: four 250 GB hard disks, to be configured as a RAID 5 array. I set things up as usual and... couldn't boot. The error message at boot time told me that /dev/md3 (where my root partition was supposed to be) was faulty and couldn't be mounted. (One of those moments where you dream of opening a boat repair shop on an island without electricity somewhere in the Pacific...)
After an unnerving afternoon, I found the solution. During the installation, once I had created the RAID arrays, I simply let the synchronization process finish (which took about an hour and a half). Only then did I proceed with the installation, setting up the initrd and configuring LILO. The order may not be important here, but here's what I conclude from my various experiments: RAID 5 arrays must be synchronized (that is, cat /proc/mdstat showing a nice [UUUU]) before the first reboot, otherwise they won't come up. |
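The [UUUU] check can be scripted. Below is a small sketch (my own helper, not from the thread) that reports whether mdstat-formatted text still shows a resync in progress; it is run here against a sample snippet shaped like /proc/mdstat during a RAID 5 rebuild:

```shell
#!/bin/sh
# raid_in_sync: read mdstat-formatted text on stdin; succeed (exit 0)
# only if no resync/recovery is in progress.
raid_in_sync() {
    ! grep -Eq 'resync|recovery'
}

# Sample snippet shaped like /proc/mdstat while a RAID 5 array rebuilds:
SAMPLE='md3 : active raid5 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      732419072 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      [==>..................]  resync = 12.4% (30303360/244139690)'

if printf '%s\n' "$SAMPLE" | raid_in_sync; then
    echo "safe to reboot"
else
    echo "still resyncing - wait before rebooting"
fi
```

In real use you would pipe cat /proc/mdstat into the helper instead of the sample string.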
Quote:
http://www.linuxquestions.org/questi...-a-4175418890/ I had read somewhere that RAID devices could now be partitioned. I tried, but the encrypted devices would not be remapped after a reboot. Partitioning a disk then creating LUKS encrypted RAID devices upon which file systems can be created works fine. However, creating a LUKS encrypted RAID device then partitioning that device and building a file system on the resulting partition appears to work but doesn't make it through a reboot. I never found a solution. |
--auto
Quote:
Code:
--auto=md
Code:
--auto=mdp |
Quote:
The kernel still assembled partition-able arrays. |
Moreover, man mdadm says it isn't useful with recent versions plus udev... I don't understand why, but I do as the man page says.
|
I did my tests and I found the solution, at least in my case.
The problem seems to be in how the booting kernel assembles the previously created RAID: it has to associate the RAID device name with the UUID of the RAID device. In previous releases of mdadm this was based on the super-minor number written directly in the RAID superblock, with metadata=0.90. With the latest releases, metadata=1.2 is the default and super-minor seems deprecated; there is no use forcing the RAID metadata to version 0.90, as super-minor is ignored. The association between device name and UUID is then made in mdadm.conf, which has to be configured and embedded in the kernel's initramfs. In short: Code:
# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
(this adds the RAID descriptions, one per row) Code:
# update-initramfs -u
(in Ubuntu this command updates the content of the initramfs with the mdadm.conf file) This did the trick for me. It worked even when booting a live system and managing the RAID in a chrooted environment. Hope this helps. |
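For Slackware rather than Ubuntu, the equivalent steps would be roughly as follows. This is a sketch: Slackware keeps the file at /etc/mdadm.conf, and I believe mkinitrd's -R flag pulls in RAID support, but verify both against your mkinitrd version before relying on them:

```
# Record the name-to-UUID mapping of the assembled arrays:
mdadm --detail --scan >> /etc/mdadm.conf

# Rebuild the initrd with RAID support, then reinstall the boot loader:
mkinitrd -F -R
lilo
```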
Yes, I do it like you.
Also be careful to use the FD partition type for the boot partition (it will be auto-assembled). The DA partition type won't be auto-assembled (but will work with "mdadm --detail --scan >> /etc/mdadm/mdadm.conf"). |
Quote:
Using an initrd, type DA does get auto-assembled even without the use of an mdadm.conf. I'll do some further testing, talk with the team, and see if we'll make any other changes. |
For the root partition type (DA or FD), the documentation could cover both cases (initrd and no initrd).
|
If you don't want to boot with an initrd and you are still using arrays with the partition type set to fd and v0.90 metadata, the simple solution is just to mount the volumes by UUID.
An fstab entry might look like: Code:
UUID=41c22818-fbad-4da6-8196-b816df0b7aa8 /boot ext3 defaults 0 0
You can determine the UUIDs by running blkid. Personally, I still don't like dealing with initrds on my systems. I think it's needless complexity in most cases. |
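To illustrate the blkid step, here is a small sketch (the helper and the sample line are mine; the UUID is the one quoted above) that turns a line of blkid output into the matching fstab entry:

```shell
#!/bin/sh
# Sample line shaped like the output of `blkid /dev/md1`:
BLKID_LINE='/dev/md1: UUID="41c22818-fbad-4da6-8196-b816df0b7aa8" TYPE="ext3"'

# fstab_entry BLKID_LINE MOUNTPOINT FSTYPE
# Pull the UUID="..." field out of a blkid line and print an fstab entry.
fstab_entry() {
    uuid=$(printf '%s\n' "$1" | sed 's/.*UUID="\([^"]*\)".*/\1/')
    printf 'UUID=%s %s %s defaults 0 0\n' "$uuid" "$2" "$3"
}

fstab_entry "$BLKID_LINE" /boot ext3
```

In real use you would run blkid against the actual md device and paste the resulting line into fstab by hand or via the helper.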