LinuxQuestions.org
Slackware: This forum is for the discussion of Slackware Linux.

Old 03-04-2019, 09:42 AM   #1
3rensho
Member
 
Registered: Mar 2008
Location: Switzerland
Distribution: Slackware64
Posts: 205

Rep: Reputation: 10
md0 becomes md127


Installed Slack64-current from Eric's 1.3.19 iso (thank you Eric) on an mdadm RAID 1 array (two 3TB drives). md0 lives on /dev/sda2 and /dev/sdb2. Installed grub on sda and sdb and created an /etc/mdadm.conf file which looks proper. Rebooted (the array was still syncing) and I get the grub prompt. I select the 4.19.26 kernel; boot starts with the large font, the screen goes black after a bit, and boot info is displayed in a smaller font. Boot continues briefly and then terminates with an error stating that it can't run fsck on the boot partition. I take the offer to go to repair mode and look in /dev, and strangely there is no md0 any longer; the ubiquitous md127 has appeared, which explains the error.

Boot Knoppix 8.1, chroot to the broken system. mdadm.conf looks fine.

Anyone know how I get md0 back??? I've done a lot of searching and everything I try goes belly up. Have come across references to update-initramfs -u but Slackware doesn't have one.

I'm at my wit's end. Thanks in advance.
 
Old 03-04-2019, 02:14 PM   #2
mrmazda
Senior Member
 
Registered: Aug 2016
Location: USA
Distribution: openSUSE, Debian, Knoppix, Mageia, Fedora, others
Posts: 1,334

Rep: Reputation: 391
The default generated mdadm.conf resulted in my device names taking the form md12x; they were changed to the form md[0-n] by modifying mdadm.conf to take the following form:
Code:
HOMEHOST <ignore>
DEVICE containers partitions
ARRAY /dev/md1 metadata=1.0 name=hostname:filesystemlabel UUID=1234...
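If it helps, a complete minimal file in that form looks like the sketch below. The UUID and name are placeholders; on a running system `mdadm --detail --scan` (as root) prints the exact ARRAY line to use instead.

```shell
# Write a sample mdadm.conf to a scratch path; values are placeholders,
# not real array metadata.
cat > /tmp/mdadm.conf.sample <<'EOF'
HOMEHOST <ignore>
DEVICE containers partitions
ARRAY /dev/md1 metadata=1.0 name=hostname:root UUID=00000000:00000000:00000000:00000000
EOF
# Sanity check: the file should contain exactly one ARRAY line
grep -c '^ARRAY' /tmp/mdadm.conf.sample
```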
HTH
 
Old 03-04-2019, 02:35 PM   #3
Gerard Lally
Senior Member
 
Registered: Sep 2009
Location: Brú na Bóinne, IE
Distribution: Slackware, NetBSD
Posts: 1,520

Rep: Reputation: 984
I had this problem on several occasions. I used Lilo with a small RAID 1 partition for /boot and never seemed to get it right. Couldn't figure out a definitive solution to this md127 re-assignment problem.

Here is how I do mdadm RAID1 on the system disk now. It entailed moving from Lilo to the monstrosity that is Grub, and to a different partitioning scheme, but it has not failed me since.

Create a BIOS Boot Partition (type EF02) on each disk (I have Legacy BIOS selected in the UEFI firmware, and I use GPT partitioning). 2MB is supposed to be enough but I leave it at 4.

(As far as I remember the BIOS Boot Partition type shows up in gdisk/cgdisk only if you choose GPT partitioning. I'm not aware of a compelling reason to prefer MBR over GPT these days anyway.)

If you don't want to use LVM, and you want to separate /home and /, then you need to create a separate RAID partition for each (but don't create a separate RAID 1 partition for /boot).

If you intend to use LVM on top of mdadm, it is sufficient to fill the remainder of each disk with just a single partition for your array. Assign type Linux Filesystem (type 8300) to this partition - no need for RAID Autodetect. Make sure each disk has exactly the same partitioning scheme.

Now create your RAID 1 array:

Code:
# Long:
mdadm --create /dev/md0 --name=mdsystem --level=1 \
    --raid-devices=2 /dev/sda1 /dev/sdb1
# Short:
mdadm -C /dev/md0 -N mdsystem -l 1 \
    -n 2 /dev/sda1 /dev/sdb1
Go ahead and accept metadata=1.2 instead of the older 0.90 for this.

Now create your Logical volumes:

Code:
pvcreate /dev/md0
vgcreate volgroup00 /dev/md0
lvcreate -L 8G -n lvswap volgroup00
lvcreate -L 72G -n lvroot volgroup00
lvcreate -l 100%FREE -n lvhome volgroup00

mkswap /dev/mapper/volgroup00-lvswap
swapon /dev/mapper/volgroup00-lvswap
Install as normal. Do not install Lilo.

At the end, exit the installer and chroot into /mnt.

cd to /boot and create a small initrd script, using Eric's script:

Code:
/usr/share/mkinitrd/mkinitrd_command_generator.sh > mkinitrd.sh
(Creating the mkinitrd.sh script allows you to inspect the generated command, to make sure -R for raid and -L for LVM are included.)

If everything is OK go ahead and run the script to create your initrd:

Code:
bash mkinitrd.sh
I use the generic kernel as well:

Code:
cd /boot
rm System.map vmlinuz
ln -s System.map-generic-$(uname -r) System.map
ln -s vmlinuz-generic-$(uname -r) vmlinuz
Now install grub on each disk:

Code:
grub-install /dev/sda
grub-install /dev/sdb
In /etc/default/grub you can choose the entry you want to boot by default:

Code:
GRUB_DEFAULT="1>6"
That is, the 2nd entry in the Grub main menu, and the 7th entry in the sub-menu.

Now run grub-mkconfig:
Code:
grub-mkconfig -o /boot/grub/grub.cfg
That should be enough to get your system up and running.

Remember to add the -k parameter to Eric's script if, at some point, you install a new kernel:

Code:
/usr/share/mkinitrd/mkinitrd_command_generator.sh -k 4.19.27 > mkinitrd.sh

Last edited by Gerard Lally; 03-04-2019 at 03:44 PM.
 
1 member found this post helpful.
Old 03-04-2019, 06:01 PM   #4
majekw
LQ Newbie
 
Registered: May 2011
Distribution: Slackware
Posts: 11

Rep: Reputation: 18
3rensho:
- use UUID in /etc/fstab - it always works
- if you are using an initramfs, you need to rebuild it to put mdadm.conf into it
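As a sketch of the first point (the UUID below is a placeholder; on a real system `blkid /dev/md0`, run as root, prints the actual value to substitute in):

```shell
# Placeholder UUID; obtain the real one with `blkid /dev/md0` (as root).
uuid="00000000-0000-0000-0000-000000000000"
# An fstab entry keyed on the UUID survives md0/md127 renames:
printf 'UUID=%s  /  ext4  defaults  1  1\n' "$uuid" > /tmp/fstab.sample
cat /tmp/fstab.sample
```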

Gerard Lally:
In my opinion it's better to generate the config:
Code:
/usr/share/mkinitrd/mkinitrd_command_generator.sh -c >/etc/mkinitrd.conf
Then simply
Code:
mkinitrd -F
to build/rebuild it. No special scripts, just use the already available option :-)

Last edited by majekw; 03-04-2019 at 06:10 PM.
 
2 members found this post helpful.
Old 03-04-2019, 11:21 PM   #5
3rensho
Member
 
Registered: Mar 2008
Location: Switzerland
Distribution: Slackware64
Posts: 205

Original Poster
Rep: Reputation: 10
Thank you all for your responses. Much appreciated. I'll start working thru them and report back.
 
Old 03-05-2019, 12:51 AM   #6
Mark Pettit
Member
 
Registered: Dec 2008
Location: Cape Town, South Africa
Distribution: Slackware 14.2 64 Multi-Lib
Posts: 523

Rep: Reputation: 210
As @majekw said, you could use a UUID, but then you end up with very cryptic and unreadable fstab files. Just label the partitions uniquely and then use /dev/disk/by-label/xxx, where xxx is something like "ROOTMIRROR", "RAID5MEDIA", etc.

My own fstab (unedited, presented as-is):
cat /etc/fstab
/dev/disk/by-label/EVO860 / ext4 defaults,lazytime,noatime 1 1
/dev/disk/by-label/CADDYROOT /caddyroot ext4 defaults,lazytime,noatime 1 2
/dev/disk/by-label/CADDYHOME /caddyhome ext4 defaults,lazytime,noatime 1 2
#/dev/cdrom /mnt/cdrom auto noauto,owner,ro,comment=x-gvfs-show 0 0
/dev/fd0 /mnt/floppy auto noauto,owner 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
proc /proc proc defaults 0 0
tmpfs /dev/shm tmpfs nosuid,nodev,noexec 0 0

The "caddy" names are for the SSD that is stuck into the old CD-caddy tray.
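For reference, labels like these can be set on an existing ext2/3/4 filesystem with e2label from e2fsprogs (XFS and other filesystems have their own tools). A sketch, run here against a throwaway image file so no real disk is touched; on a real system you would point e2label at the partition, e.g. /dev/sdb1:

```shell
# Create a small ext4 image to practice on (no root needed for a file):
dd if=/dev/zero of=/tmp/label-demo.img bs=1M count=4 2>/dev/null
mke2fs -q -F -t ext4 /tmp/label-demo.img
# Set and read back the label:
e2label /tmp/label-demo.img CADDYROOT
e2label /tmp/label-demo.img
```

On a real partition the new label then shows up as a symlink under /dev/disk/by-label/.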

Last edited by Mark Pettit; 03-05-2019 at 12:53 AM.
 
Old 03-05-2019, 02:52 AM   #7
bassmadrigal
LQ Guru
 
Registered: Nov 2003
Location: West Jordan, UT, USA
Distribution: Slackware
Posts: 5,874

Rep: Reputation: 3585
Quote:
Originally Posted by Mark Pettit View Post
As @majekw said, you could use a UUID - but really then you end up with very cryptic and unreadable fstab files. Just label the partitions uniquely and then use /dev/disk/by-label/xxx where xxx is something like "ROOTMIRROR", or "RAID5MEDIA" etc

You can simplify that a bit more by using LABEL=your_label_here instead of using /dev/disk/by-label/

Code:
LABEL=EVO860          /                ext4        defaults,lazytime,noatime           1   1
LABEL=CADDYROOT       /caddyroot       ext4        defaults,lazytime,noatime           1   2
LABEL=CADDYHOME       /caddyhome       ext4        defaults,lazytime,noatime           1   2
#/dev/cdrom           /mnt/cdrom       auto        noauto,owner,ro,comment=x-gvfs-show 0   0
/dev/fd0              /mnt/floppy      auto        noauto,owner                        0   0
devpts                /dev/pts         devpts      gid=5,mode=620                      0   0
proc                  /proc            proc        defaults                            0   0
tmpfs                 /dev/shm         tmpfs       nosuid,nodev,noexec                 0   0
But in reality, the fstab is only as cryptic as you make it. Sure, UUIDs don't specify what the drive is, but your mount points or comments should help with that. I personally can't use labels unless I'm willing to use temporary labels as I'm swapping out drives. I much prefer UUIDs for my use case. Here's my /etc/fstab:

Code:
####### NVMe #######
UUID=29a036b1-2cd0-4f06-b831-e72e6212106c   swap                    swap     defaults                        0   0
UUID=76e5697b-96f0-4287-8c1a-2b6e6961765a   /                       ext4     defaults,noatime,nodiratime     0   1
UUID=c22ddd38-188b-482e-a873-98e12a0acef1   /home                   ext4     defaults,noatime,nodiratime     0   2

###### HD ######
UUID=afea7ab4-0ea4-44da-8262-21e9b41deb69   /share/gothrough        ext4     defaults                        0   2
UUID=48be7001-0df5-4880-b48d-6ecdb9ef3d75   /share/movies           ext4     defaults                        0   2
UUID=5f466dd0-2434-46ba-a276-6eee7647da9d   /share/music            ext4     defaults                        0   2
UUID=96b34af3-84f8-4882-852f-52a65793b221   /share/tv/completed     ext4     defaults                        0   2
UUID=ebdab491-caf6-4ae7-9672-67ed3bb3d665   /share/tv/documentary   ext4     defaults                        0   2
UUID=cae38214-71f3-4678-bf78-3e836e7a4022   /share/tv/ongoing       ext4     defaults                        0   2

###### Overlay ######
none  /var/www/htdocs/share/tv      overlay   auto,lowerdir=/share/tv/completed/TV\040Shows:/share/tv/documentary/TV\040Shows:/share/tv/ongoing/TV\040Shows   0   0
none  /var/www/htdocs/share/movies  overlay   auto,lowerdir=/share/movies/Movies:/share/movies/Movies-R      0   0

devpts           /dev/pts         devpts      gid=5,mode=620   0   0
proc             /proc            proc        defaults         0   0
tmpfs            /dev/shm         tmpfs       defaults         0   0
 
Old 03-05-2019, 03:56 AM   #8
mrmazda
Senior Member
 
Registered: Aug 2016
Location: USA
Distribution: openSUSE, Debian, Knoppix, Mageia, Fedora, others
Posts: 1,334

Rep: Reputation: 391
Quote:
Originally Posted by bassmadrigal View Post
I personally can't use labels unless I'm willing to use temporary labels as I'm swapping out drives. I much prefer UUIDs for my use case.
You could do as I do and not have that limitation. I do a lot of disk and partition cloning and disk swapping. I don't use HOME as the label for the filesystem mounted on /home/. I include a partition number and/or a piece of the drive's name or serial number, e.g. home09h8s is the label on host big41: sda9 is the partition, and h8s is the last three characters of the whole drive's serial number.
 
Old 03-05-2019, 12:18 PM   #9
bassmadrigal
LQ Guru
 
Registered: Nov 2003
Location: West Jordan, UT, USA
Distribution: Slackware
Posts: 5,874

Rep: Reputation: 3585
Quote:
Originally Posted by mrmazda View Post
You could do as I do and not have that limitation. I do a lot of disk and partition cloning and disk swapping. I don't use HOME as the label for the filesystem mounted on /home/. I include a partition number and/or a piece of the drive's name or serial number, e.g. home09h8s is the label on host big41: sda9 is the partition, and h8s is the last three characters of the whole drive's serial number.
Sure, there are various ways to mitigate my issue, but I prefer having my drives labeled a specific way. When I'm moving that data to a newer, bigger hard drive, I still like to use the same label. It makes things look nice and uniform in Dolphin's left-hand pane. And I don't have my root or home partitions labeled, since there are direct links for those in Dolphin already.

EDIT: I should mention... it's not wrong to do labels in the fstab, just as it's not wrong to do UUIDs (or keeping the original device names, assuming they don't change). I am not trying to sway anyone a specific way, just providing information so people can make their own decision. I prefer using UUIDs in my fstab, because I feel it makes my workflow easier. That will not be the case for everyone.

Last edited by bassmadrigal; 03-05-2019 at 12:21 PM.
 
Old 03-05-2019, 03:07 PM   #10
Dunc.
LQ Newbie
 
Registered: Jul 2012
Location: Cumbria UK
Distribution: Slackware
Posts: 15

Rep: Reputation: Disabled
I had a similar problem some time ago. I blamed changing the kernel, but I was not sure at the time and am even less sure now. However, my solution was simple.
Code:
mdadm --stop /dev/md127
mdadm --stop /dev/md0
mdadm -A --scan
When it did the auto-scan the second time around, it worked!

Dunc.
 
1 member found this post helpful.
Old 03-05-2019, 06:48 PM   #11
Richard Cranium
Senior Member
 
Registered: Apr 2009
Location: Carrollton, Texas
Distribution: Slackware64 14.2
Posts: 3,237

Rep: Reputation: 1608
If you are using an initrd, make sure that the mdadm.conf that's part of the initrd is either empty (the default file isn't empty) or contains the correct values for your array. Otherwise, udev will create the md device using the high numbers.
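On Slackware, mkinitrd keeps its working tree in /boot/initrd-tree, so the copy of mdadm.conf that gets baked into the image can be inspected and corrected there. A sketch; the paths assume the default mkinitrd layout, and the commands need root:

```shell
# Check the mdadm.conf that will end up inside the initrd:
cat /boot/initrd-tree/etc/mdadm.conf
# If it is wrong, copy in the correct one (or truncate it), then rebuild:
cp /etc/mdadm.conf /boot/initrd-tree/etc/mdadm.conf
mkinitrd -F
```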
 
1 member found this post helpful.
Old 03-06-2019, 06:45 PM   #12
ttk
Member
 
Registered: May 2012
Location: Sebastopol, CA
Distribution: Slackware64
Posts: 777
Blog Entries: 26

Rep: Reputation: 963
Quote:
Originally Posted by Dunc. View Post
Code:
mdadm --stop /dev/md127
mdadm --stop /dev/md0
mdadm -A --scan
This solution works for me, too.

I'm not sure why it got renamed to md127 in the first place, though, or whether it might get renamed again.

In my case it's not the boot/root filesystem, so I can play silly scripting tricks to mount it as md0 or md127, but knowing the root cause would be nice.
 
Old 03-08-2019, 01:25 PM   #13
majekw
LQ Newbie
 
Registered: May 2011
Distribution: Slackware
Posts: 11

Rep: Reputation: 18
The reason is that, since some version of mdadm, it checks the 'homehost' of an array during assembly.

In Slackware the hostname is set during the 'normal' boot, after the initramfs finishes, but the arrays get assembled by mdadm in the initramfs, while the hostname is still unknown.
So when mdadm assembles the array in the initramfs, there is a mismatch between the hostname (still darkstar) and the homehost written in the array metadata. It then treats such a mismatched array as 'foreign' and refuses to assign the proper device number.

That's why putting an mdadm.conf with array definitions into the initramfs works (you force mdadm to assemble the array exactly as specified in the config).
And that's why assembling the array while the system is running normally also works: there is no longer a hostname/homehost mismatch.

It would probably (I haven't tested it) also work if mdadm.conf had only one line:
Code:
HOMEHOST yourrealhostname
or
Code:
HOMEHOST <ignore>
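The mismatch can be confirmed directly: in `mdadm --examine` output the Name field records the homehost before the colon. A sketch; /dev/sda2 is an example member device and the command needs root:

```shell
# The part before the colon in a line like "Name : darkstar:0" is the
# homehost recorded in the metadata; compare it with the running hostname.
mdadm --examine /dev/sda2 | grep -w Name
hostname
```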

Last edited by majekw; 03-08-2019 at 01:26 PM.
 
3 members found this post helpful.
Old 03-09-2019, 03:10 AM   #14
ttk
Member
 
Registered: May 2012
Location: Sebastopol, CA
Distribution: Slackware64
Posts: 777
Blog Entries: 26

Rep: Reputation: 963
Thanks, that gives me a place to start figuring it out. I'm not using an initramfs, so there must be something slightly different going on here.
 
Old 03-09-2019, 06:03 AM   #15
3rensho
Member
 
Registered: Mar 2008
Location: Switzerland
Distribution: Slackware64
Posts: 205

Original Poster
Rep: Reputation: 10
Update:

First of all many thanks to all of you for the wealth of information you provided. After doing some more checking it is looking like there may be a hardware problem. I will try to ferret that out first before creating a raid array again. Again, thank you all for taking the time to respond. Will be back when hw is sorted.
 
  

