Old 12-01-2013, 01:03 AM   #31
meetscott
Samhain Slackbuild Maintainer
 
Registered: Sep 2004
Location: Phoenix, AZ, USA
Distribution: Slackware
Posts: 411

Original Poster
Rep: Reputation: 42

Quote:
Originally Posted by Richard Cranium View Post
Well, on the non-working system, try manually running the commands that the init script does...
Code:
  if [ -x /sbin/mdadm ]; then
    /sbin/mdadm -E -s >/etc/mdadm.conf
    /sbin/mdadm -S -s
    /sbin/mdadm -A -s
    # This seems to make the kernel see partitions more reliably:
    fdisk -l /dev/md* 1> /dev/null 2> /dev/null
  fi
Yep, I saw that code when I was analyzing the init scripts. That's part of where I got the idea of running the code I posted previously. Doing this, I was able to get it up and running, but it still will not come up on its own. It's miserable having to start the server with cryptic, manual intervention, but at least I was able to get it up. I just can't reboot it unless I'm there to manually bring up the RAID devices, activate the logical volumes, and mount the root volume.
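To be concrete, the manual intervention I'm doing from the initrd shell is roughly the following (a sketch only; the volume group and logical volume names are placeholders rather than my exact layout):
Code:
  # assemble the arrays by hand, the same way the init script would
  /sbin/mdadm -E -s >/etc/mdadm.conf
  /sbin/mdadm -S -s
  /sbin/mdadm -A -s
  # bring up LVM on top of the arrays
  vgscan --mknodes
  vgchange -ay
  # mount root (placeholder names), then let the boot continue
  mount -o ro /dev/vg1/root /mnt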

I will continue to troubleshoot it, but I'm completely baffled as to why it won't come up on its own. As a drastic measure, I may try rebuilding everything from scratch. I hate to do that because it's so time-consuming, but it may be the only way to get the system out of the weird state it seems to be stuck in.
 
Old 12-01-2013, 01:12 AM   #32
meetscott
Samhain Slackbuild Maintainer
 
Registered: Sep 2004
Location: Phoenix, AZ, USA
Distribution: Slackware
Posts: 411

Original Poster
Rep: Reputation: 42
Quote:
Originally Posted by Richard Cranium View Post
I'll add that under Slackware 14.1, my raid devices are ignoring their old names and are initializing as /dev/md125, /dev/md126, and /dev/md127. LVM comes up OK, nonetheless.
I remember you mentioning this before. Neither of my systems, that is, the working one and the one that requires manual intervention at startup, is initializing with /dev/md125, /dev/md126, and /dev/md127. Both of mine are doing the right thing, the right thing being /dev/md0 and /dev/md1 in my case.
 
Old 12-01-2013, 04:11 AM   #33
Richard Cranium
Senior Member
 
Registered: Apr 2009
Location: Carrollton, Texas
Distribution: Slackware64 14.1
Posts: 1,574

Rep: Reputation: 463
Quote:
Originally Posted by meetscott View Post
I remember you mentioning this before. Neither of my systems, that is, the working one and the one that requires manual intervention at startup, is initializing with /dev/md125, /dev/md126, and /dev/md127. Both of mine are doing the right thing, the right thing being /dev/md0 and /dev/md1 in my case.
Well, in Slackware 14.0 after the 3.2.45 kernel upgrade, my arrays would initialize as the high numbers but would reset themselves to the names that I had used to create them.

In Slackware 14.1, the same arrays initialize as the high numbers but never change their names to the names I had used to create them.
 
Old 12-01-2013, 04:41 AM   #34
Richard Cranium
Senior Member
 
Registered: Apr 2009
Location: Carrollton, Texas
Distribution: Slackware64 14.1
Posts: 1,574

Rep: Reputation: 463
Try (as root)...
Code:
rm /boot/initrd-tree/etc/mdadm.conf
mkinitrd -o /boot/initrd-test.gz
...then add a stanza to lilo.conf to use initrd-test.gz instead of initrd.gz.
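The test stanza itself would look something like this (the image line is a placeholder; copy it, and a root line if you have one, from your existing working entry):
Code:
# hypothetical test entry in /etc/lilo.conf
image = /boot/vmlinuz-generic-3.10.17
  initrd = /boot/initrd-test.gz
  label = raidtest
  read-only
Remember to re-run lilo after editing the file so the new entry is actually installed.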

Why?

Well, on my machine, the initrd-tree created via /usr/share/mkinitrd/mkinitrd_command_generator.sh ends up with a copy of the default /etc/mdadm.conf. That default mdadm.conf contains only comments, but the init code in the initrd contains...
Code:
  if [ -x /sbin/mdadm ]; then
    # If /etc/mdadm.conf is present, udev should DTRT on its own;
    # If not, we'll make one and go from there:
    if [ ! -r /etc/mdadm.conf ]; then
      /sbin/mdadm -E -s >/etc/mdadm.conf
      /sbin/mdadm -S -s
      /sbin/mdadm -A -s
      # This seems to make the kernel see partitions more reliably:
      fdisk -l /dev/md* 1> /dev/null 2> /dev/null
    fi
  fi
...which means that the only RAID assembly that happens is whatever the kernel auto-assembles for you. I tried this on my machine, and now I've got my proper RAID array names back.
 
Old 12-01-2013, 10:29 PM   #35
meetscott
Samhain Slackbuild Maintainer
 
Registered: Sep 2004
Location: Phoenix, AZ, USA
Distribution: Slackware
Posts: 411

Original Poster
Rep: Reputation: 42
Quote:
Originally Posted by Richard Cranium View Post
Try (as root)...
Code:
rm /boot/initrd-tree/etc/mdadm.conf
mkinitrd -o /boot/initrd-test.gz
...then add a stanza to lilo.conf to use initrd-test.gz instead of initrd.gz.

Why?

Well, on my machine, the initrd-tree created via /usr/share/mkinitrd/mkinitrd_command_generator.sh ends up with a copy of the default /etc/mdadm.conf. That default mdadm.conf contains only comments, but the init code in the initrd contains...
Code:
  if [ -x /sbin/mdadm ]; then
    # If /etc/mdadm.conf is present, udev should DTRT on its own;
    # If not, we'll make one and go from there:
    if [ ! -r /etc/mdadm.conf ]; then
      /sbin/mdadm -E -s >/etc/mdadm.conf
      /sbin/mdadm -S -s
      /sbin/mdadm -A -s
      # This seems to make the kernel see partitions more reliably:
      fdisk -l /dev/md* 1> /dev/null 2> /dev/null
    fi
  fi
...which means that the only RAID assembly that happens is whatever the kernel auto-assembles for you. I tried this on my machine, and now I've got my proper RAID array names back.
I tried your suggestion. It was a good one, but it still didn't work. Commenting out /etc/mdadm.conf in the initrd-tree did force the kernel to pick up new names, so it fired up md126 and md127. After that it still panics because it can't bring up the LVM volume that holds root.

I'm almost to the point of modifying init so it does what I want it to do.
 
Old 12-02-2013, 12:29 AM   #36
Richard Cranium
Senior Member
 
Registered: Apr 2009
Location: Carrollton, Texas
Distribution: Slackware64 14.1
Posts: 1,574

Rep: Reputation: 463
When you say "commenting out the /etc/mdadm.conf in the initrd-tree", did you mean "remove the file etc/mdadm.conf" from the initrd-tree?

Hmm. When you get this broken system running, what is the output of
Code:
pvs -v
when run as root? If the raid arrays are running (even with screwed up names), the lvm tools should find the physical volume information from the UUIDs in the metadata.
 
Old 12-02-2013, 01:52 AM   #37
meetscott
Samhain Slackbuild Maintainer
 
Registered: Sep 2004
Location: Phoenix, AZ, USA
Distribution: Slackware
Posts: 411

Original Poster
Rep: Reputation: 42
Quote:
Originally Posted by Richard Cranium View Post
When you say "commenting out the /etc/mdadm.conf in the initrd-tree", did you mean "remove the file etc/mdadm.conf" from the initrd-tree?

Hmm. When you get this broken system running, what is the output of
Code:
pvs -v
when run as root? If the raid arrays are running (even with screwed up names), the lvm tools should find the physical volume information from the UUIDs in the metadata.
I left the mdadm.conf file there with everything commented out inside it.

Output of pvs -v:
Code:
    Scanning for physical volume names
  PV         VG   Fmt  Attr PSize   PFree DevSize PV UUID                               
  /dev/md0   vg1  lvm2 a--  508.00m    0  509.75m 5xHv5N-0Jv3-CU67-C6Iz-KErk-QLdr-65bnlJ
  /dev/md1   vg2  lvm2 a--    1.82t    0    1.82t DLfHqe-gfvV-m3H5-qlzm-sK9k-C4bB-LbyHvL
 
Old 12-02-2013, 02:05 AM   #38
mlslk31
Member
 
Registered: Mar 2013
Location: Florida, USA
Distribution: Slackware, FreeBSD
Posts: 168

Rep: Reputation: 62
I'm too much of a n00b to be in this conversation (and don't use initrd or LVM), but I might throw this in here. One of my lowly setups is a two-disk setup with a plain JFS-formatted /boot partition. The kernels are loaded from that plain partition, the kernel auto-detects my RAID-0 /dev/md0, and everything else is read off /dev/md0. I have something in my kernel cmdline like "md=0,/dev/sda1,/dev/sdb1 root=/dev/md0", or something to that effect. I didn't know any better, so I went entirely off Documentation/md.txt from my particular kernel source.

It seems that for the non-LVM md partitions I use, the partition type should be fd00 if you want them autodetected. Despite the documentation saying that autodetection is for DOS/MBR-style partitions only, it works with GPT partitions as well. If you don't want them autodetected, don't mark them as fd00, and let mdadm take care of it.
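For what it's worth, on GPT disks that type code can be set with gdisk's 't' command, or non-interactively with sgdisk from the same gptfdisk package (the device and partition numbers below are examples only):
Code:
  # example only: mark partition 1 on each member disk as type fd00 (Linux RAID)
  sgdisk --typecode=1:fd00 /dev/sda
  sgdisk --typecode=1:fd00 /dev/sdb
  # double-check the result
  sgdisk --print /dev/sda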

As for the numbers, that took some jiggling. For v0 metadata, you can apparently assemble the array as "0" and pass the --update=super-minor flag to mdadm so that the preferred minor defaults to 0. This trick does not work with v1 arrays. To see the current preferred minor, use `mdadm --detail assembled_raid`. I've forgotten what I did to get the v1 arrays to budge the minor. I either assembled or rebuilt them as "18" and "19", respectively, instead of their normal names, and then set them up like this:

ARRAY pretty_name_for_dev_md UUID=1337:f00:ba4:babab0033
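In command form, that v0 super-minor trick amounts to something like this (array and member device names are placeholders):
Code:
  # v0.90 metadata only: rewrite the preferred minor stored in the superblock
  mdadm --stop /dev/md127
  mdadm --assemble /dev/md0 --update=super-minor /dev/sdc1 /dev/sdd1
  # check which preferred minor is recorded now
  mdadm --detail /dev/md0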

Again, I'm only learning this for the second time on Linux (the first time wasn't much fun), and I'm doing this for new installs that I could reinstall or restore from backup. I also didn't get the feeling that the --auto=md{x} flag worked all the time. Zero confidence, but I got my particular setup up and running. YMMV.
 
Old 12-02-2013, 03:17 AM   #39
Richard Cranium
Senior Member
 
Registered: Apr 2009
Location: Carrollton, Texas
Distribution: Slackware64 14.1
Posts: 1,574

Rep: Reputation: 463
Quote:
Originally Posted by meetscott View Post
I left the mdadm.conf file there with everything commented out inside it.

Output of pvs -v:
Code:
    Scanning for physical volume names
  PV         VG   Fmt  Attr PSize   PFree DevSize PV UUID                               
  /dev/md0   vg1  lvm2 a--  508.00m    0  509.75m 5xHv5N-0Jv3-CU67-C6Iz-KErk-QLdr-65bnlJ
  /dev/md1   vg2  lvm2 a--    1.82t    0    1.82t DLfHqe-gfvV-m3H5-qlzm-sK9k-C4bB-LbyHvL
Ok. I thought that you might have had some PVs that had been around a while and were in lvm1 format.

Please try either removing or renaming the mdadm.conf file in the initrd. That should force the init script to run the commands...
Code:
      /sbin/mdadm -E -s >/etc/mdadm.conf
      /sbin/mdadm -S -s
      /sbin/mdadm -A -s
      # This seems to make the kernel see partitions more reliably:
      fdisk -l /dev/md* 1> /dev/null 2> /dev/null
...which should result in what you want.
 
1 member found this post helpful.
Old 12-07-2013, 12:45 PM   #40
meetscott
Samhain Slackbuild Maintainer
 
Registered: Sep 2004
Location: Phoenix, AZ, USA
Distribution: Slackware
Posts: 411

Original Poster
Rep: Reputation: 42
Quote:
Originally Posted by mlslk31 View Post
I'm too much of a n00b to be in this conversation (and don't use initrd or LVM), but I might throw this in here.
I've been really busy this last week. So I haven't looked at this stuff again. But I appreciate the input and ideas. You may consider yourself a n00b, but you might not be giving yourself enough credit ;-)

Thanks!
scott
 
Old 12-07-2013, 12:47 PM   #41
meetscott
Samhain Slackbuild Maintainer
 
Registered: Sep 2004
Location: Phoenix, AZ, USA
Distribution: Slackware
Posts: 411

Original Poster
Rep: Reputation: 42
Quote:
Originally Posted by Richard Cranium View Post
Ok. I thought that you might have had some PVs that had been around a while and were in lvm1 format.

Please try either removing or renaming the mdadm.conf file in the initrd. That should force the init script to run the commands...
Code:
      /sbin/mdadm -E -s >/etc/mdadm.conf
      /sbin/mdadm -S -s
      /sbin/mdadm -A -s
      # This seems to make the kernel see partitions more reliably:
      fdisk -l /dev/md* 1> /dev/null 2> /dev/null
...which should result in what you want.
I'll give it a shot and let you know. Thanks!
scott
 
Old 12-07-2013, 04:30 PM   #42
meetscott
Samhain Slackbuild Maintainer
 
Registered: Sep 2004
Location: Phoenix, AZ, USA
Distribution: Slackware
Posts: 411

Original Poster
Rep: Reputation: 42
This is a miracle. I renamed mdadm.conf to mdadm.conf.bak in the initrd-tree, re-ran mkinitrd with no parameters, re-ran lilo, and rebooted.
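For anyone else hitting this, the whole sequence was roughly the following (paths assume the stock /boot/initrd-tree location):
Code:
  mv /boot/initrd-tree/etc/mdadm.conf /boot/initrd-tree/etc/mdadm.conf.bak
  mkinitrd    # no options: repack /boot/initrd.gz from the existing initrd-tree
  lilo        # reinstall the boot loader so it uses the new initrd
  reboot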

Voila! Everything came up. This is the first time I haven't had to intervene manually at boot to get this machine up since my upgrade from Slackware 14.0 (on the 3.2.29 kernel) to Slackware 14.1.

Richard Cranium, you are the man! Thanks!
 
Old 05-01-2014, 11:48 AM   #43
rmathes
LQ Newbie
 
Registered: May 2014
Distribution: Slackware 14.1
Posts: 6

Rep: Reputation: Disabled
Thank you all for this valuable information; it helped me fix my problem as well. I just removed the mdadm.conf, rebuilt the initrd.gz, and bam, I was up and running on my new RAID setup, and it is smoking fast!
 
Old 05-02-2014, 08:50 PM   #44
Richard Cranium
Senior Member
 
Registered: Apr 2009
Location: Carrollton, Texas
Distribution: Slackware64 14.1
Posts: 1,574

Rep: Reputation: 463
I'll just mention this bit:

The safest way to ensure that your software RAID arrays are set up correctly would be to run the command
Code:
/sbin/mdadm -E -s >/etc/mdadm.conf
as root prior to running the mkinitrd command (or the outstanding command generator script). udevd will use that information, if present, to correctly auto-assemble your arrays in the first place. Otherwise, udevd will create them with the wrong names and the initrd init script will re-assemble them, which slows your boot down a bit.
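Concretely, keeping the initrd's mdadm.conf correct boils down to re-running something like this whenever the RAID layout changes (the kernel image name is only a placeholder; the command generator prints the exact mkinitrd line for your setup):
Code:
  # record the arrays as the running system sees them
  /sbin/mdadm -E -s >/etc/mdadm.conf
  # print a suggested mkinitrd command for your kernel, run it, then reinstall the boot loader
  /usr/share/mkinitrd/mkinitrd_command_generator.sh /boot/vmlinuz-generic-3.10.17
  lilo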

So, the two approaches have pros and cons:
  • Remove /etc/mdadm.conf from your initrd:
    • Pro: You will never have to re-generate your initrd if you add a new RAID array or delete/change an existing one.
    • Con: You will assemble the arrays twice during the boot process, slowing it down slightly.
  • Ensure that your /etc/mdadm.conf in the initrd is correct:
    • Pro: You will only assemble your arrays once during boot, speeding things up some amount.
    • Con: You have to remember to re-create your initrd as you add/delete/change your RAID configuration.
 
1 member found this post helpful.
  

