Slackware: This forum is for the discussion of Slackware Linux.
I've seen in several places that the kernel's RAID autodetection is supposed to be deprecated. Does anybody have an authoritative source of information as to why this is being done, and the bigger picture? All I find is snippets on various forums or how-tos which only say that it is being deprecated, with no further explanation. I thought things were a bit like that "in the beginning": arrays had to be started through init scripts, then the kernel got smarter and started to use auto-detection. Are we sort of going backwards somehow?
Also, under Slackware, does it mean that one absolutely has to start using an initrd (and possibly a separate, non-RAID /boot partition?) in order to use RAID? I'm not sure I understand why something that was as simple as setting partitions to 'fd' and initially assembling the array on the command line has to become more complicated and involve an initrd. Can anybody shed a bit more light on what's going to happen? Is there a little tutorial specific to Slackware on this topic already somewhere out there?
Talking about SOFTWARE RAID that comes with Slackware ...
/boot can be part of a RAID setup. Disconnect one drive and it boots from the other, and vice versa. At least it works for me in mirrored or mirrored-striped (RAID 10) setups.
I have several machines running with 'fd' partitions but have more recently switched to 'da' partitions after reading information posted by an LQ member.
I use an initrd to remove guesswork. I don't remember whether the necessary modules are built into the standard delivered kernel configurations (generic, huge) or not.
EDIT: I believe I only used RAID 1 with /boot, not RAID 10.
Last edited by TracyTiger; 04-23-2013 at 01:30 AM.
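For what it's worth, the Slackware way of building such an initrd is with mkinitrd; a minimal sketch, assuming a generic kernel, an ext4 root filesystem on /dev/md0, and kernel version 3.10.17 (adjust all three to your own system):

```shell
# -R tells Slackware's mkinitrd to include mdadm support so the
# arrays get assembled at boot; -r names the root device on the array.
mkinitrd -c -k 3.10.17 -m ext4 -f ext4 -r /dev/md0 -R
```

After rebuilding the initrd, re-run lilo so the new image is picked up.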
I guess it will just be a matter of following the instructions given for RAID 0 and RAID 5 when building RAID 1 as well, in the future?
Shouldn't the README be updated to recommend using initrd for all types of RAID, now that kernel auto-detection is going away - or is the general thinking that this feature will take a long time to be removed from the kernel, and it is still safe to build new RAID 1 arrays with auto-detection on?
Has there been any discussion more recent than six years ago about autodetection being deprecated? Twenty-odd kernel releases later and it's still here.
I would say if autodetect suits your needs then use it.
Thanks saulgoode. I was actually thinking that removing RAID autodetection from the kernel would effectively be a step back in time. The /boot partition couldn't be part of a RAID array, which means it would sit on a single disk, which means that if that disk goes down, the system becomes unbootable. That at least partially defeats the point of having RAID, no?
I suppose with uefi at least, one could keep two system partitions on two disks, list them both in uefi, and include the initrd on the uefi system partitions - that should be workable to provide redundancy for the boot setup?
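As a sketch of that UEFI idea (the disk names, partition numbers, loader path, and mount points are all assumptions for illustration; the sync step presumes both ESPs are mounted):

```shell
# Register a firmware boot entry for the ESP on each disk, so the
# machine can fall back to the surviving one if a drive dies.
efibootmgr -c -d /dev/sda -p 1 -L "Slackware (disk A)" -l '\EFI\Slackware\elilo.efi'
efibootmgr -c -d /dev/sdb -p 1 -L "Slackware (disk B)" -l '\EFI\Slackware\elilo.efi'
# After a kernel/initrd update, mirror one ESP onto the other:
rsync -a /boot/efi/ /boot/efi2/
```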
LILO should be able to handle a faulty component of a RAID 1 device with proper usage of the 'raid-extra-boot' option (depending upon your BIOS's capabilities, I don't know about UEFI).
I'm not sure the raid-extra-boot option in Lilo would be sufficient. If the kernel doesn't support raid autodetect anymore, the following should be a possible scenario for a RAID 1 setup:
1. Have a /boot partition on each hdd.
2. Copy the same files in both /boot partitions.
3. Tell Lilo to write one boot sector for the first hdd, pointing to the kernel or initrd on the /boot partition of the first disk.
4. Write a separate Lilo record on the boot sector of the second hard-disk, which points to the kernel or initrd stored on the boot partition of the second hdd.
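A rough, untested sketch of steps 3 and 4 (the device names and config file paths are made up for illustration):

```shell
# /etc/lilo-disk1.conf sets boot=/dev/sda and points its image=/initrd=
# lines at the kernel in the first disk's /boot; /etc/lilo-disk2.conf is
# identical except boot=/dev/sdb, with paths under the second disk's
# /boot copy (mounted, say, at /mnt/boot2).
lilo -C /etc/lilo-disk1.conf   # write the boot sector on the first disk
lilo -C /etc/lilo-disk2.conf   # write the boot sector on the second disk
```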
However, even the above doesn't seem workable, as the entry in /etc/fstab corresponding to /boot would be pointing either to /dev/sda1 or /dev/sdb1 - as they are not part of raid. Thus in case of hdd failure, the /boot entry would become wrong. On the other hand, I guess Linux doesn't really need the /boot partition after it has booted - so although wrong, the system might just be able to finish booting?
"/dev/sdb1 in fstab" Wanna know a little secret? You should be able to use '/dev/root' or maybe even 'rootfs' instead of a fixed path. You used to be able to use /dev/fd2 as a 'universal' fstab root entry.
In the example above, I was referring to the /boot entry - such as:
/dev/sda1 /boot ext4 defaults 1 2
I guess /dev/root wouldn't be of much help with that. Then again, on digging around, it seems that /dev/root is set up using udev. I suppose one could write some clever udev rule which would automatically link /dev/boot to /dev/sda1 or /dev/sdb1, depending on which one is present, and solve things that way. Something to ponder.
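A sketch of such a rule, matching on a filesystem label rather than a fixed device node (the label and symlink name are invented for illustration):

```
# /etc/udev/rules.d/99-bootdev.rules
# Whichever partition carries the filesystem label "slackboot" gets a
# /dev/bootdev symlink, so fstab can say:
#   /dev/bootdev  /boot  ext4  defaults  1 2
ENV{ID_FS_LABEL}=="slackboot", SYMLINK+="bootdev"
```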
Thank goodness RAID autodetect hasn't gone away yet. It seems there will be some head scratching involved, when/if it goes away, to make up for the lost functionality!
Historically, when the kernel booted, it used a mechanism called 'autodetect' to identify partitions which are used in RAID arrays: it assumed that all partitions of type 0xfd are so used. It then attempted to automatically assemble and start these arrays.
This approach can cause problems in several situations (imagine moving part of an old array onto another machine before wiping and repurposing it: reboot and watch in horror as the piece of dead array gets assembled as part of the running RAID array, ruining it); kernel autodetect is correspondingly deprecated.
Since the recommended way to boot Slackware is to use an initrd with the generic kernel, it's not much of a stretch to just do things the recommended way. But it's your system to do with as you wish.
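For reference, the lilo.conf stanza for that recommended setup looks roughly like this (the kernel version and root device are examples):

```
image = /boot/vmlinuz-generic-3.10.17
  initrd = /boot/initrd.gz
  root = /dev/md0
  label = Linux
  read-only
```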
(imagine moving part of an old array onto another machine before wiping and repurposing it: reboot and watch in horror as the piece of dead array gets assembled as part of the running RAID array, ruining it)
/*
* lets try to run arrays based on all disks that have arrived
* until now. (those are in pending_raid_disks)
*
* the method: pick the first pending disk, collect all disks with
* the same UUID, remove all from the pending list and put them into
* the 'same_array' list. Then order this list based on superblock
* update time (freshest comes first), kick out 'old' disks and
* compare superblocks. If everything's fine then run it.
*
* If "unit" is allocated, then bump its reference count
*/
So unless there is a way to move a partition from one machine to another without moving the disk itself, I'd say the gloom and doom scenario presented is bunk. The worst that will happen is the minor device numbers (e.g., md0, md1, ...) might change, and I'm pretty sure that would be prevented by assigning those based on UUID in /etc/mdadm.conf.
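Pinning the device numbers that way is straightforward; a sketch (run as root; the output line is illustrative):

```shell
# Record each array's UUID so the mdX names can't shift between boots.
mdadm --examine --scan >> /etc/mdadm.conf
# Appends lines of the form:
#   ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
```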
In case you don't understand what this does: it is the start of the deprecation process. Only if your RAID superblock is the old 0.9 version will autodetection work; if you have a version 1.x superblock, the array will not be started by that code block.
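To check which superblock version your array members carry, and to create a new array that remains eligible for autodetection (device names are examples):

```shell
# Inspect the metadata version on an array member:
mdadm --examine /dev/sda1 | grep Version
# mdadm defaults to 1.x metadata; request the old autodetectable
# format explicitly when creating a new array:
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --metadata=0.90 /dev/sda1 /dev/sdb1
```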
I still think that it's a bit crazy, as it would leave no "official" way of having redundancy for the /boot partition, which is a loss of current functionality. If my understanding is correct, since the kernel won't be able to assemble RAID 1 directly, the initrd or initramfs would have to live on a regular /boot partition, not a RAID one. That means it can't be protected by redundancy. I guess the best way to deal with it would have been to build in the ability to manually specify the initial RAID 1 configuration to boot from, either as command-line parameters to the kernel, or by making the boot loader understand it. This way there wouldn't be any autodetection involved, but the kernel and initrd/initramfs would still be able to reside on a partition protected by RAID 1. If my understanding of the current situation is right, that is.
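For what it's worth, the kernel does document an md= boot parameter (Documentation/md.txt) that assembles an array from explicitly named devices without any autodetection, though it only applies when md is built into the kernel rather than a module; a sketch for lilo.conf, assuming a RAID 1 of sda1 and sdb1:

```
# In lilo.conf, pass the array layout on the kernel command line:
append = "md=0,/dev/sda1,/dev/sdb1"
root = /dev/md0
```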