md0 becomes md127
Installed Slack64-current from Eric's 1.3.19 iso (thank you Eric) on an mdadm RAID 1 array (two 3TB drives). md0 lives on /dev/sda2 and /dev/sdb2. Installed grub on sda and sdb and created an /etc/mdadm.conf file which looks proper. Rebooted (the array was still syncing) and I get the grub prompt, select the 4.19.26 kernel, boot starts with the large font, the screen goes black after a bit and boot info is displayed in a smaller font. Boot continues briefly and then terminates with an error stating that it can't run fsck on the boot partition. I take the offer to go to repair mode and look in /dev, and strangely there is no md0 any longer - the ubiquitous md127 has appeared, which explains the error.
Boot Knoppix 8.1 and chroot into the broken system. mdadm.conf looks fine. Anyone know how I get md0 back??? I've done a lot of searching and everything I try goes belly up. I have come across references to update-initramfs -u, but Slackware doesn't have one. I'm at my wit's end. Thanks in advance. |
With the default generated mdadm.conf my device names took the form md12x; they changed to the form md[0-n] after modifying mdadm.conf to contain the following:
Code:
HOMEHOST <ignore> |
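For reference, a fuller /etc/mdadm.conf along those lines might look like the sketch below. The ARRAY line is only an illustration (the UUID is taken from output quoted later in this thread); generate your own with `mdadm --detail --scan`.

```shell
# /etc/mdadm.conf - sketch, not from the original post
# Ignore the homehost recorded in the array metadata when assembling:
HOMEHOST <ignore>
# Array definition, as produced by `mdadm --detail --scan` (UUID illustrative):
ARRAY /dev/md0 metadata=1.2 UUID=f80a1f11:1a0802bc:7ef257d5:87f46ad2
```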
I had this problem on several occasions. I used Lilo with a small RAID 1 partition for /boot and never seemed to get it right. Couldn't figure out a definitive solution to this md127 re-assignment problem.
Here is how I do mdadm RAID 1 on the system disk now. It entailed moving from Lilo to the monstrosity that is Grub, and to a different partitioning scheme, but it has not failed me since.

Create a BIOS Boot Partition (type EF02) on each disk (I have Legacy BIOS selected in the UEFI firmware, and I use GPT partitioning). 2MB is supposed to be enough but I leave it at 4. (As far as I remember the BIOS Boot Partition type shows up in gdisk/cgdisk only if you choose GPT partitioning. I'm not aware of a compelling reason to prefer MBR over GPT these days anyway.)

If you don't want to use LVM, create separate RAID partitions if you want to separate /home and / (but don't create a separate RAID 1 partition for /boot). If you intend to use LVM on top of mdadm, it is sufficient to fill the remainder of each disk with a single partition for your array. Assign type Linux Filesystem (8300) to this partition - no need for RAID Autodetect. Make sure each disk has exactly the same partitioning scheme.

Now create your RAID 1 array:
Code:
# Long:

Now create your Logical Volumes:
Code:
pvcreate /dev/md0

At the end, exit the installer and chroot into /mnt. cd to /boot and create a small initrd script, using Eric's script:
Code:
/usr/share/mkinitrd/mkinitrd_command_generator.sh > mkinitrd.sh

If everything is OK, go ahead and run the script to create your initrd:
Code:
bash mkinitrd.sh

Code:
cd /boot

Code:
grub-install /dev/sda

In /etc/default/grub you can pick the default menu entry, for example:
Code:
GRUB_DEFAULT="1>6"

Now run grub-mkconfig:
Code:
grub-mkconfig -o /boot/grub/grub.cfg

Remember to add the -k parameter to Eric's script if, at some point, you install a new kernel:
Code:
/usr/share/mkinitrd/mkinitrd_command_generator.sh -k 4.19.27 > mkinitrd.sh |
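The array-creation command above was truncated in the post. As a hedged sketch only (device names, volume group name, and sizes are my assumptions, not from the original), creating a RAID 1 array over two partitions and putting LVM on top might look like:

```shell
# Hypothetical member partitions - adjust to your own layout.
# Create the RAID 1 array from the two large partitions:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# LVM on top of the array:
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 50G -n root vg0        # size is illustrative
lvcreate -l 100%FREE -n home vg0   # rest of the space for /home
```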
3rensho:
- use UUID in /etc/fstab - it always works
- if you are using an initramfs, you need to rebuild it to put mdadm.conf into it

Gerard Lally: In my opinion it's better to make a config:
Code:
/usr/share/mkinitrd/mkinitrd_command_generator.sh -c > /etc/mkinitrd.conf
and then rebuild with:
Code:
mkinitrd -F |
Thank you all for your responses. Much appreciated. I'll start working thru them and report back.
|
As @majekw said, you could use a UUID - but then you end up with very cryptic and unreadable fstab files. Just label the partitions uniquely and then use /dev/disk/by-label/xxx, where xxx is something like "ROOTMIRROR" or "RAID5MEDIA", etc.
My own fstab (unedited - presented as-is):
Code:
cat /etc/fstab
/dev/disk/by-label/EVO860     /            ext4    defaults,lazytime,noatime            1 1
/dev/disk/by-label/CADDYROOT  /caddyroot   ext4    defaults,lazytime,noatime            1 2
/dev/disk/by-label/CADDYHOME  /caddyhome   ext4    defaults,lazytime,noatime            1 2
#/dev/cdrom                   /mnt/cdrom   auto    noauto,owner,ro,comment=x-gvfs-show  0 0
/dev/fd0                      /mnt/floppy  auto    noauto,owner                         0 0
devpts                        /dev/pts     devpts  gid=5,mode=620                       0 0
proc                          /proc        proc    defaults                             0 0
tmpfs                         /dev/shm     tmpfs   nosuid,nodev,noexec                  0 0
The "caddy" names are for the SSD that is stuck into the old CD-caddy tray. |
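Labels like those can be put on existing ext4 filesystems with e2label; a short sketch (the device and label names here are assumptions):

```shell
# Label the filesystem so a /dev/disk/by-label/ROOTMIRROR symlink appears:
e2label /dev/md0 ROOTMIRROR

# Verify the label was set:
blkid -s LABEL /dev/md0
```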
Quote:
Code:
LABEL=EVO860 / ext4 defaults,lazytime,noatime 1 1
Code:
####### NVMe ####### |
Quote:
|
Quote:
EDIT: I should mention... it's not wrong to do labels in the fstab, just as it's not wrong to do UUIDs (or keeping the original device names, assuming they don't change). I am not trying to sway anyone a specific way, just providing information so people can make their own decision. I prefer using UUIDs in my fstab, because I feel it makes my workflow easier. That will not be the case for everyone. |
I had a similar problem some time ago. I blamed changing the kernel, but I was not sure at the time and am even less so now. However, my solution was simple.
Code:
mdadm --stop /dev/md127

Dunc. |
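Stopping md127 is presumably followed by reassembling the array under the wanted name and recording it; a hedged sketch (the member partitions are assumptions):

```shell
# Stop the mis-numbered array:
mdadm --stop /dev/md127

# Reassemble it explicitly as md0 from its member partitions:
mdadm --assemble /dev/md0 /dev/sda2 /dev/sdb2

# Record the assembled array so the name sticks across reboots:
mdadm --detail --scan >> /etc/mdadm.conf
```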
If you are using an initrd, make sure that the mdadm.conf that's part of the initrd is either empty (the default file isn't empty) or contains the correct values for your array. Otherwise, udev will create the md device using the high numbers.
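On Slackware the initrd is normally built from the tree under /boot/initrd-tree, so one way to check and fix this is the sketch below (paths assume the stock mkinitrd layout; adjust to your setup):

```shell
# Inspect the mdadm.conf that will be baked into the initrd:
cat /boot/initrd-tree/etc/mdadm.conf

# Generate correct definitions from the currently running arrays:
mdadm --detail --scan > /etc/mdadm.conf
cp /etc/mdadm.conf /boot/initrd-tree/etc/mdadm.conf

# Rebuild the initrd (reads options from /etc/mkinitrd.conf):
mkinitrd -F
```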
|
Quote:
I'm not sure why it got renamed to md127 in the first place, though, or whether it might get renamed again. In my case it's not the boot/root filesystem, so I can play silly scripting tricks to mount it as md0 or md127, but knowing the root cause would be nice. |
The reason is that, since some version of mdadm, it checks the 'homehost' of an array during assembly.
In Slackware the hostname is set during 'normal' boot, after the initramfs finishes, but the arrays get assembled by mdadm inside the initramfs, while the hostname is still unknown. So mdadm in the initramfs sees a mismatch between the hostname (still 'darkstar') and the homehost written in the array metadata. It then treats such a mismatched array as 'foreign' and refuses to assign the proper device number. That's why putting an mdadm.conf with array definitions into the initramfs works (you force mdadm to assemble the array exactly as specified in the config). And that's why assembling the array while the system is running normally also works, because there is no longer a hostname/homehost mismatch. It should probably (I didn't test it) also work if mdadm.conf contains only one line:
Code:
HOMEHOST yourrealhostname
or:
Code:
HOMEHOST <ignore> |
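You can see the homehost recorded in the array metadata and compare it with the running hostname; mdadm prints it as part of the Name field (the device name here is an assumption):

```shell
# The "Name" field shows homehost:arraynumber, e.g. "slackware:0":
mdadm --detail /dev/md127 | grep -i 'name'

# Compare with the hostname the running system actually has:
hostname
```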
Thanks, that gives me a place to start figuring it out. I'm not using an initramfs, so there must be something slightly different going on here.
|
Update:
First of all many thanks to all of you for the wealth of information you provided. After doing some more checking it is looking like there may be a hardware problem. I will try to ferret that out first before creating a raid array again. Again, thank you all for taking the time to respond. Will be back when hw is sorted. |
Quote:
The same happens to me; I just use UUIDs to take care of it.
OK, I'm back. Replaced a flaky disk, started over, and created the RAID 1 again (used metadata=1.2 this time; had been using 0.90). Installed Slack64-current from Eric's 1.3.19 edition, copied over some scripts from my NAS and built the latest kernel. Booted and it crapped out again, but some progress noted, in that the fatal message about e2fsck not being found has changed from md127 to md0. Will try entering the UUID in fstab to see if that works. Also noticed when building mdadm.conf that the raid is listed as /dev/md/0. Checked the last backup from my previously running -current and its mdadm.conf listed it as /dev/md/0_0. Does the metadata version make a difference in how the /dev/md device is listed?
Edit: Noticed something else strange. The raid suddenly started rebuilding itself. Also, if I do:
Code:
file -s /dev/md0
/dev/md0: Linux rev 1.0 ext4 filesystem data, UUID=318fbe83-55e7-452e-b27b-e4acfafa20e4 (needs journal recovery) (extents) (large files) (huge files)
but this UUID is very different from:
Code:
mdadm --detail --scan /dev/md0
ARRAY /dev/md0 metadata=1.2 name=slackware:0 UUID=f80a1f11:1a0802bc:7ef257d5:87f46ad2
This too is disturbing:
Code:
mdadm --examine /dev/md0 | grep UUID
mdadm: No md superblock detected on /dev/md0. |
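Two different identifiers are being compared there: `file -s` reports the ext4 filesystem's UUID, while `mdadm --detail` reports the array's own UUID, and the two never match by design. Likewise, `mdadm --examine` reads the md superblock of a member device, not of the assembled array, so pointing it at /dev/md0 is expected to fail. A sketch of what to run instead (member partition name assumed):

```shell
# Examine a member partition, not the assembled array device:
mdadm --examine /dev/sda2 | grep UUID
```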
Finally got it working. Created the /dev/md0 array after switching back to metadata 0.90. After rebooting everything runs fine, although it still insists the array is md127. I don't care, as I have the UUID entered in fstab, so it can call itself whatever it wants. Thanks again to everyone for your ideas and suggestions.
|