LinuxQuestions.org (/questions/)
-   Slackware (https://www.linuxquestions.org/questions/slackware-14/)
-   -   Raid auto-detection deprecated? What's next? (https://www.linuxquestions.org/questions/slackware-14/raid-auto-detection-deprecated-whats-next-4175459196/)

xj25vm 04-22-2013 06:11 PM

Raid auto-detection deprecated? What's next?
 
I've seen in several places that kernel RAID autodetection is supposed to be deprecated. Does anybody have an authoritative source of information as to why this is being done, and the bigger picture? All I can find are snippets on various forums or how-tos which only say that it is being deprecated - no further explanation. I thought things were a bit like that "in the beginning" - having to start arrays through init scripts - then the kernel got smarter and started to use auto-detection. Are we sort of going backwards somehow?

Also, under Slackware, does it mean that one absolutely has to start using an initrd (and possibly a separate, non-RAID /boot partition?) in order to use RAID? I'm not sure I understand why something that was as simple as setting partitions to 'fd' and initially assembling the array on the command line has to become more complicated and involve an initrd. Can anybody shed a bit more light on what's going to happen? Is there a little tutorial specific to Slackware on this topic already somewhere out there?

Thanks

Richard Cranium 04-22-2013 09:15 PM

Well, this seemed pretty clear to me. The new superblock format(s) don't support the old autodetect flag.

TracyTiger 04-23-2013 12:37 AM

Quote:

Originally Posted by xj25vm (Post 4936832)
Also, under Slackware, does it mean that one has to absolutely start using initrd (and possibly a separate, non-raid /boot partition?) in order to use raid?

Talking about SOFTWARE RAID that comes with Slackware ...

/boot can be part of a RAID setup. Disconnect one drive and it boots from the other, and vice versa. At least it works for me in mirrored or mirrored-striped (RAID 10) setups.

I have several machines running with fd (Linux raid autodetect) partitions, but have more recently switched to da (non-FS data) partitions after reading information posted by an LQ member.

I use an initrd to remove the guesswork. I don't remember whether the necessary modules are built into the standard delivered kernel configurations (generic, huge) or not.
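
For reference, creating a mirrored array for /boot goes roughly like this (the device names are only examples, adjust for your own setup):

Code:

# Example only - device names are placeholders for your own setup.
# Metadata 0.90 keeps the superblock at the end of the partition, which is
# the format the old kernel autodetect (and LILO reading /boot) expects.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --metadata=0.90 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0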

EDIT: I believe I only used RAID 1 with /boot, not RAID 10.

xj25vm 04-23-2013 12:38 AM

Thanks Richard Cranium. I also found the Slackware README_RAID here (doh!):

http://slackware.org.uk/slackware/sl...EADME_RAID.TXT

I guess it will just be a matter of following the instructions given for RAID 0 and RAID 5 when building RAID 1 as well, in the future?

Shouldn't the README be updated to recommend using an initrd for all types of RAID, now that kernel auto-detection is going away? Or is the general thinking that this feature will take a long time to be removed from the kernel, and that it is still safe to build new RAID 1 arrays with auto-detection on?

saulgoode 04-23-2013 08:57 AM

Has there been any discussion more recent than six years ago about autodetection being deprecated? Twenty-odd kernel releases later and it's still here.

I would say if autodetect suits your needs then use it.

xj25vm 04-23-2013 09:26 AM

Thanks saulgoode. I was actually thinking that removing RAID autodetection from the kernel would effectively be a step back in time. The /boot partition couldn't be part of a RAID array, which means it would sit on a single disk, which means that if that disk goes down the system becomes unbootable. That at least partially defeats the point of having RAID, no?

I suppose with UEFI at least, one could keep an EFI system partition on each of two disks, list them both in the UEFI boot menu, and include the kernel and initrd on both system partitions - that should be workable to provide redundancy for the boot setup?
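
Something along these lines, perhaps (entirely hypothetical - the devices, partition numbers and loader path are just guesses to illustrate the idea):

Code:

# Hypothetical illustration - devices, partition numbers and loader path are guesses.
# Register a UEFI boot entry for the EFI system partition on each disk:
efibootmgr --create --disk /dev/sda --part 1 --label "Slackware (disk A)" \
           --loader '\EFI\Slackware\elilo.efi'
efibootmgr --create --disk /dev/sdb --part 1 --label "Slackware (disk B)" \
           --loader '\EFI\Slackware\elilo.efi'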

saulgoode 04-23-2013 10:19 AM

Quote:

Originally Posted by xj25vm (Post 4937232)
Thanks saulgoode. I was actually thinking that removing RAID autodetection from the kernel would effectively be a step back in time. The /boot partition couldn't be part of a RAID array, which means it would sit on a single disk, which means that if that disk goes down the system becomes unbootable. That at least partially defeats the point of having RAID, no?

LILO should be able to handle a faulty component of a RAID 1 device with proper usage of the 'raid-extra-boot' option (depending upon your BIOS's capabilities, I don't know about UEFI).
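
A rough sketch of what I mean (the device names are only examples):

Code:

# /etc/lilo.conf - rough sketch, device names are examples only
boot = /dev/md0                # the RAID 1 device holding /boot
raid-extra-boot = mbr-only     # also write a boot record to each member disk's MBR
image = /boot/vmlinuz
  root = /dev/md1
  label = Linux
  read-only

With 'mbr-only', LILO writes its boot record to the MBR of every disk in the array, so whichever disk the BIOS falls back to should still be able to start the loader.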

xj25vm 04-23-2013 10:33 AM

I'm not sure the raid-extra-boot option in LILO would be sufficient. If the kernel doesn't support RAID autodetect anymore, the following should be a possible scenario for a RAID 1 setup:

1. Have a /boot partition on each hdd.
2. Copy the same files to both /boot partitions.
3. Tell LILO to write a boot sector on the first hdd, pointing to the kernel (and initrd) on the /boot partition of the first disk.
4. Write a separate LILO record on the boot sector of the second hdd, pointing to the kernel (and initrd) stored on the /boot partition of the second disk.

However, even the above doesn't seem workable, as the entry in /etc/fstab corresponding to /boot would point either to /dev/sda1 or to /dev/sdb1, since they are not part of the RAID. Thus in case of a hdd failure the /boot entry could become wrong. On the other hand, I guess Linux doesn't really need the /boot partition after it has booted - so, although wrong, the system might just be able to finish booting?
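
In LILO terms I imagine it would look roughly like this (the config file names and devices are made up purely for illustration):

Code:

# Purely illustrative - the config file names and devices are made up.
# /etc/lilo-sda.conf would have  boot = /dev/sda  and point at the files on the
# first disk's /boot; /etc/lilo-sdb.conf would do the same for the second disk.
lilo -C /etc/lilo-sda.conf
lilo -C /etc/lilo-sdb.conf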

gnashley 04-23-2013 12:01 PM

"/dev/sdb1 in fstab" Wanna know a little secret? You should be able to use '/dev/root' or maybe even 'rootfs' instead of a fixed path. You used to be able to use /dev/fd2 as a 'universal' fstab root entry.

xj25vm 04-23-2013 12:20 PM

In the example above, I was referring to the /boot entry - such as:

/dev/sda1 /boot ext4 defaults 1 2

I guess /dev/root wouldn't be of much help with that. Then again, on digging around, it seems that /dev/root is set up using udev. I suppose one could write some clever udev rule which would automatically link /dev/boot to /dev/sda1 or /dev/sdb1, depending on which one is present, and solve things that way. Something to ponder.
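
Something along these lines, maybe (completely untested, just to illustrate the idea):

Code:

# /etc/udev/rules.d/99-boot-link.rules - untested sketch, device names are examples
# Create a /dev/boot symlink pointing at whichever /boot partition is present;
# if both disks are there, whichever rule runs last wins.
KERNEL=="sda1", SYMLINK+="boot"
KERNEL=="sdb1", SYMLINK+="boot"

Then /etc/fstab could list /dev/boot instead of a fixed device name.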

Thank goodness RAID autodetect hasn't gone away yet. It seems there will be some head-scratching involved when/if it does go away, to make up for the lost functionality!

Richard Cranium 04-23-2013 07:32 PM

Quote:

Originally Posted by saulgoode (Post 4937215)
Has there been any discussion more recent than six years ago about autodetection being deprecated? Twenty-odd kernel releases later and it's still here.

I would say if autodetect suits your needs then use it.

Well, be careful when you use it.

Quote:

Historically, when the kernel booted, it used a mechanism called 'autodetect' to identify partitions which are used in RAID arrays: it assumed that all partitions of type 0xfd are so used. It then attempted to automatically assemble and start these arrays.

This approach can cause problems in several situations (imagine moving part of an old array onto another machine before wiping and repurposing it: reboot and watch in horror as the piece of dead array gets assembled as part of the running RAID array, ruining it); kernel autodetect is correspondingly deprecated.

Since the recommended way to boot Slackware is to use an initrd with the generic kernel, it's not much of a stretch to just do things the recommended way. But it's your system to do with as you wish.
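
For what it's worth, building the initrd is only a couple of commands - something like this (the kernel version and root device are just examples, substitute your own):

Code:

# Example only - substitute your own kernel version, filesystem and root device.
# -R tells Slackware's mkinitrd to include RAID support so the arrays are
# assembled from the initrd before the root filesystem is mounted.
mkinitrd -c -k 3.2.29 -m ext4 -f ext4 -r /dev/md1 -R

Then point lilo.conf at the generic kernel and /boot/initrd.gz and re-run lilo.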

saulgoode 04-23-2013 11:55 PM

Quote:

(imagine moving part of an old array onto another machine before wiping and repurposing it: reboot and watch in horror as the piece of dead array gets assembled as part of the running RAID array, ruining it)

Except that is not what happens.

The Linux kernel code (in drivers/md/md.c) says
Code:

/*
 * lets try to run arrays based on all disks that have arrived
 * until now. (those are in pending_raid_disks)
 *
 * the method: pick the first pending disk, collect all disks with
 * the same UUID, remove all from the pending list and put them into
 * the 'same_array' list. Then order this list based on superblock
 * update time (freshest comes first), kick out 'old' disks and
 * compare superblocks. If everything's fine then run it.
 *
 * If "unit" is allocated, then bump its reference count
 */

So unless there is a way to move a partition from one machine to another without moving the disk itself, I'd say the gloom and doom scenario presented is bunk. The worst that will happen is the minor device numbers (e.g., md0, md1, ...) might change, and I'm pretty sure that would be prevented by assigning those based on UUID in /etc/mdadm.conf.
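
That is, something like this in /etc/mdadm.conf (the UUIDs here are made up):

Code:

# /etc/mdadm.conf - illustrative only, the UUIDs are made up
ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
ARRAY /dev/md1 UUID=f0c2b35e:7c1d9a42:8be4e7ac:0d1c2e3f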

wildwizard 04-24-2013 04:00 AM

I think you should look at this line :-
http://git.kernel.org/cgit/linux/ker...?id=v3.2#n5227

In case you don't understand what this does, it is the start of the deprecation process. Only if your RAID superblock is the old 0.90 version will autodetection still work; if you have a version 1.x superblock, the array will not be started by that code block.
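
You can check which superblock format an array member carries with mdadm (the device name is just an example):

Code:

# The device name is only an example.
mdadm --examine /dev/sda1 | grep -i version
# A 0.90 superblock is the old format that autodetect still picks up from
# 0xfd partitions; a 1.x superblock is not started by that code path.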

saulgoode 04-24-2013 07:48 AM

I stand humbled and corrected. Thank you wildwizard and Richard Cranium.

xj25vm 04-24-2013 08:47 AM

I still think that it's a bit crazy, as it would leave no "official" way of having redundancy for the /boot partition - which is a loss of current functionality. If my understanding is correct, since the kernel won't be able to assemble RAID 1 arrays on its own anymore, the initrd or initramfs would have to live on a regular /boot partition, not a raided one. That means it can't be protected by redundancy.

I guess the best way to deal with it would have been to build in the ability to manually specify the initial RAID 1 configuration to boot from, either as command line parameters to the kernel, or by making the boot loader understand it. This way there wouldn't be any autodetection involved, but the kernel and initrd/initramfs would still be able to reside on a partition protected by RAID 1. If my understanding of the current situation is right, that is.

