Old 05-22-2008, 06:53 AM   #16
Alien Bob
Slackware Contributor
 
Registered: Sep 2005
Location: Eindhoven, The Netherlands
Distribution: Slackware
Posts: 8,559

Rep: Reputation: 8106

Your mkinitrd.conf looks fine to me.
What you can try is to download mkinitrd_command_generator.sh and run it like this on your system:
Code:
mkinitrd_command_generator.sh -c
and compare the output with your previously posted mkinitrd.conf.
Or try
Code:
mkinitrd_command_generator.sh -i -c
if you want to build your mkinitrd.conf file interactively.
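For illustration, the generator prints a ready-to-use mkinitrd command for the running system. For a setup like yours the output would look roughly like this (the kernel version and devices here are examples taken from this thread, not authoritative output):
Code:
mkinitrd -c -k 2.6.24.5-smp -m ext3 -f ext3 -r /dev/cryptvg/root -C /dev/md1 -L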

Eric
 
Old 05-22-2008, 08:44 AM   #17
gargamel
Senior Member
 
Registered: May 2003
Distribution: Slackware, OpenSuSE
Posts: 1,839

Original Poster
Rep: Reputation: 242
Wow, that was quick, thanks a lot!

I've run the script and found two lines of its output quite interesting, though I doubt that either of them has to do with my problems of the last two or three days.

Code:
LUKSDEV="/dev/md/1"
/dev/md/1 instead of /dev/md1. With /dev/md/1 I was never even asked for the LUKS passphrase. Although there is a device /dev/md/1 in the running system, it's apparently not seen at boot time as a LUKS device.
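For what it's worth, both spellings can exist in a running system, depending on how udev sets up the nodes; a quick way to see what is actually there:
Code:
# list both md naming schemes; one or both may exist depending on udev
ls -l /dev/md* /dev/md/ 2>/dev/null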

Code:
MODULE_LIST="pata_acpi:ata_generic:pata_via:mbcache:jbd:ext3"
I modified this to read:
Code:
MODULE_LIST="pata_acpi:ata_generic:pata_via:uhci-hcd:usbhid:mbcache:jbd:ext3"
But when I boot I now get:
Code:
[...]
Using /lib/modules/2.6.24.5-smp/kernel/fs/mbcache.ko
Using /lib/modules/2.6.24.5-smp/kernel/fs/jbd/jbd.ko
Using /lib/modules/2.6.24.5-smp/kernel/fs/ext3/ext3.ko
initrd.gz:  Loading 'de-latin1-nodeadkeys' keyboard mapping:
md: md0 stopped.
md: bind<hdb1>
md: bind<hda1>
raid1: raid set md0 active with 2 out of 2 mirrors
mdadm: /dev/md0 has been started with 2 drives.
md: md1 stopped.
md: bind<hdb3>
md: bind<hda3>
raid1: raid set md1 active with 2 out of 2 mirrors
mdadm: /dev/md1 has been started with 2 drives.
  Reading all physical volumes.  This may take a while...
  No volume groups found
  No volume groups found
  No volume groups found
Unlocking LUKS crypt volume '/dev/cryptvg/root' on device '/dev/md1':
Enter LUKS passphrase:
key slot 0 unlocked.
Command failed: dm_task_set_name: Device /dev/cryptvg/root not found
mount: mounting /dev/mapper//dev/cryptvg/root on /mnt failed: No such file or directory
ERROR:  No /sbin/init found on rootdev (or not mounted).  Trouble ahead.
        You can try to fix it. Type 'exit' when things are done.

/bin/sh: can't access tty: job control turned off
/ $ exit
initrd.gz:  exiting
switch_root: bad newroot /mnt
Kernel panic - not syncing: Attempted to kill init!
After everything I have tried, I am still stuck at the same point. The question is: where do I go from here?

As I said above, with /dev/md/1 defined as ROOTDEV in /etc/mkinitrd.conf, the output on booting is the same, with just the three lines from "Unlocking LUKS crypt volume" through "key slot 0 unlocked." missing.

What else could I check for?

Thanks for your great patience!

gargamel

Last edited by gargamel; 05-22-2008 at 09:00 AM.
 
Old 05-22-2008, 11:19 AM   #18
Alien Bob
Slackware Contributor
 
Registered: Sep 2005
Location: Eindhoven, The Netherlands
Distribution: Slackware
Posts: 8,559

Rep: Reputation: 8106
ROOTDEV="/dev/cryptvg/root" is the correct notation. As for the other weirdness... still thinking that over.
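For reference, pulling together the values posted in this thread, the relevant part of /etc/mkinitrd.conf would then read roughly like this (a sketch assembled from this thread, not a verified config):
Code:
# excerpt - values taken from this thread
MODULE_LIST="pata_acpi:ata_generic:pata_via:uhci-hcd:usbhid:mbcache:jbd:ext3"
LUKSDEV="/dev/md1"
ROOTDEV="/dev/cryptvg/root"
ROOTFS="ext3"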

Eric
 
Old 05-22-2008, 11:59 AM   #19
gargamel
Senior Member
 
Registered: May 2003
Distribution: Slackware, OpenSuSE
Posts: 1,839

Original Poster
Rep: Reputation: 242
Just another observation from another experiment.

I wanted to find out if my /etc/lilo.conf might be part of the problem, so I changed the line
Code:
  root = /dev/cryptvg/root
to
Code:
  root = /dev/mapper/cryptvg-root
Then I ran lilo.
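For context, the surrounding lilo.conf stanza looks roughly like this (kernel path and label are examples; only the root line was changed):
Code:
image = /boot/vmlinuz
  initrd = /boot/initrd.gz
  root = /dev/cryptvg/root
  label = Linux
  read-only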
The result was *exactly* the same messages at boot as above (which I got with root = /dev/cryptvg/root). Not too surprising in hindsight: /dev/cryptvg/root is normally just a symlink to /dev/mapper/cryptvg-root, so both lines name the same device.

Not sure this helps in investigating the problem; just to let you know...

gargamel
 
Old 05-22-2008, 12:22 PM   #20
Alien Bob
Slackware Contributor
 
Registered: Sep 2005
Location: Eindhoven, The Netherlands
Distribution: Slackware
Posts: 8,559

Rep: Reputation: 8106
There seems to be a flaw in the initrd. The RAID configuration is done too late.
Could you try applying this patch to your /boot/initrd-tree/init script (assuming you used the default location for this tree):
Code:
diff -ur /boot/initrd-tree.orig/init /boot/initrd-tree/init
--- /boot/initrd-tree.orig/init 2008-04-03 22:22:23.000000000 +0200
+++ /boot/initrd-tree/init      2008-05-22 19:13:37.000000000 +0200
@@ -112,6 +112,12 @@
 fi

 if [ "$RESCUE" = "" ]; then
+  # Initialize RAID:
+  if [ -x /sbin/mdadm ]; then
+    /sbin/mdadm -E -s >/etc/mdadm.conf
+    /sbin/mdadm -A -s
+  fi
+
   # Make encrypted root partition available:
   # The useable device will be under /dev/mapper/
   # Three scenarios for the commandline exist:
@@ -135,12 +141,6 @@
     fi
   fi

-  # Initialize RAID:
-  if [ -x /sbin/mdadm ]; then
-    /sbin/mdadm -E -s >/etc/mdadm.conf
-    /sbin/mdadm -A -s
-  fi
-
   # Initialize LVM:
   if [ -x /sbin/vgscan ]; then
     /sbin/vgscan --mknodes --ignorelockingfailure
Copy the above patch text to a file, for instance /root/mkinitrd_init.patch, and then do:
Code:
cd /boot/initrd-tree
patch -p2 < /root/mkinitrd_init.patch
mkinitrd
lilo
This should recreate the initrd image and tell lilo to use it.
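If you want to double-check that the rebuilt image really contains mdadm, you can list the archive contents (assuming the stock gzipped-cpio initrd format; entries may carry a leading ./):
Code:
zcat /boot/initrd.gz | cpio -it | grep mdadm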
The patched init script will cause the initrd to first initialize the RAID, then unlock the LUKS volume and then activate the LVM.
I'm interested to know whether this patch fixes your problem. If it does, I wonder whether I had already applied it when I tested this myself a month ago.

Regards, Eric
 
Old 05-22-2008, 02:26 PM   #21
gargamel
Senior Member
 
Registered: May 2003
Distribution: Slackware, OpenSuSE
Posts: 1,839

Original Poster
Rep: Reputation: 242
Dear Eric,

You saved my day! That was it!

The patch as such didn't work, but as far as I understand it, all it does is move the RAID initialization block from its original place to the line immediately below "if [ "$RESCUE" = "" ]; then". That was simple enough to do by hand, and now I can boot my system after entering my passphrase!

This was GREAT SUPPORT, thanks a lot!

For completeness I'm posting the file init.rej; it may help you find out why the patch failed (in case you want to). When I issued the patch command
Code:
cd /boot/initrd-tree
patch -p2 < /root/mkinitrd_init.patch
I was asked which file should be patched. I entered "init" on the first attempt and "/boot/initrd-tree/init" on the second, and both times the same init.rej was created in /boot/initrd-tree.

Code:
***************
*** 112,117 ****
  fi
  
  if [ "$RESCUE" = "" ]; then
    # Make encrypted root partition available:
    # The useable device will be under /dev/mapper/
    # Three scenarios for the commandline exist:
--- 112,123 ----
  fi
  
  if [ "$RESCUE" = "" ]; then
+   # Initialize RAID:
+   if [ -x /sbin/mdadm ]; then
+     /sbin/mdadm -E -s >/etc/mdadm.conf
+     /sbin/mdadm -A -s
+   fi
+ 
    # Make encrypted root partition available:
    # The useable device will be under /dev/mapper/
    # Three scenarios for the commandline exist:
***************
*** 135,146 ****
      fi
    fi
  
-   # Initialize RAID:
-   if [ -x /sbin/mdadm ]; then
-     /sbin/mdadm -E -s >/etc/mdadm.conf
-     /sbin/mdadm -A -s
-   fi
- 
    # Initialize LVM:
    if [ -x /sbin/vgscan ]; then
      /sbin/vgscan --mknodes --ignorelockingfailure
--- 141,146 ----
      fi
    fi
  
    # Initialize LVM:
    if [ -x /sbin/vgscan ]; then
      /sbin/vgscan --mknodes --ignorelockingfailure
I can't say it often enough: THANKS SO MUCH!

gargamel
 
Old 05-22-2008, 02:45 PM   #22
Alien Bob
Slackware Contributor
 
Registered: Sep 2005
Location: Eindhoven, The Netherlands
Distribution: Slackware
Posts: 8,559

Rep: Reputation: 8106
Cool.
Pat has the patch and will probably "sit on it" for a little while to catch any negatives. Expect to see an updated mkinitrd soon-ish.

Eric
 
Old 05-22-2008, 02:53 PM   #23
gargamel
Senior Member
 
Registered: May 2003
Distribution: Slackware, OpenSuSE
Posts: 1,839

Original Poster
Rep: Reputation: 242
Excellent!

And BTW: I'm writing this post from the now-running, brand-new RAID-1 + LVM + LUKS Slackware 12.1 system! 8-)

gargamel
 
Old 05-22-2008, 02:58 PM   #24
gargamel
Senior Member
 
Registered: May 2003
Distribution: Slackware, OpenSuSE
Posts: 1,839

Original Poster
Rep: Reputation: 242
Hmm, I just thought we could compile the information in this thread into a small HOWTO... Because, once the patch is applied, most of what was said in the original post is correct. Just let me complete my system setup first. I'll come back to this when the machine runs as it is supposed to.

gargamel
 
Old 06-01-2008, 01:27 PM   #25
buck private
LQ Newbie
 
Registered: Jun 2008
Posts: 2

Rep: Reputation: 0
Quote:
Originally Posted by gargamel View Post
Hmm, I just thought we could compile the information in this thread into a small HOWTO... Because, once the patch is applied, most of what was said in the original post is correct. Just let me complete my system setup first. I'll come back to this when the machine runs as it is supposed to.

gargamel
Please do not let this idea die. I find the references to md0 and md1 extremely confusing because, as I understand this RAID1 setup, there is only md0. A mini-HOWTO that has been tested to work would therefore be a huge help to me. I have a backup HD that should arrive tomorrow, and I want to start the random write as soon as the backup is complete. Once I start the random write, my box will be useless until this is set up.

Thanks for your time.
 
Old 06-01-2008, 02:12 PM   #26
Alien Bob
Slackware Contributor
 
Registered: Sep 2005
Location: Eindhoven, The Netherlands
Distribution: Slackware
Posts: 8,559

Rep: Reputation: 8106
Quote:
Originally Posted by buck private View Post
Please do not let this idea die. I find the references to md0 and md1 extremely confusing because, as I understand this RAID1 setup, there is only md0. A mini-HOWTO that has been tested to work would therefore be a huge help to me. I have a backup HD that should arrive tomorrow, and I want to start the random write as soon as the backup is complete. Once I start the random write, my box will be useless until this is set up.

Thanks for your time.
The README_RAID.TXT file does indeed talk about creating only an md0 RAID device. But that is only one of the possibilities - nothing prevents you from creating multiple RAID devices with RAID1.
In fact, the case in this thread is one where you need an additional RAID device for your /boot partition, because the other RAID1 device is encrypted (which makes it impossible for lilo to read a kernel from that device when the computer boots).
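As an illustration only (device names assumed from earlier in this thread, not prescribed):
Code:
# two RAID1 arrays, as in this thread (hda/hdb assumed)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdb1  # small, unencrypted /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda3 /dev/hdb3  # will hold LUKS + LVM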

The mkinitrd patch has gone into Slackware 12.1's patches directory. So, with some fiddling, you can install Slackware 12.1 using RAID1 and have an LVM with LUKS encryption inside.
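A rough sketch of that stack, using the names from this thread (the mapper name "slackluks" and the LV size are just examples):
Code:
# encrypt the RAID device, then build LVM inside it
cryptsetup luksFormat /dev/md1
cryptsetup luksOpen /dev/md1 slackluks
pvcreate /dev/mapper/slackluks
vgcreate cryptvg /dev/mapper/slackluks
lvcreate -L 10G -n root cryptvg
mkfs.ext3 /dev/cryptvg/root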

Eric
 
Old 06-02-2008, 02:55 PM   #27
gargamel
Senior Member
 
Registered: May 2003
Distribution: Slackware, OpenSuSE
Posts: 1,839

Original Poster
Rep: Reputation: 242
Quote:
Originally Posted by buck private View Post
Please do not let this idea die. I find the references to md0 and md1 extremely confusing because, as I understand this RAID1 setup, there is only md0. A mini-HOWTO that has been tested to work would therefore be a huge help to me. I have a backup HD that should arrive tomorrow, and I want to start the random write as soon as the backup is complete. Once I start the random write, my box will be useless until this is set up.

Thanks for your time.
Alien Bob has answered most of your question. And, yes, I'll try to make a mini-HOWTO out of this thread, but my job schedule is a bit tight this week, so I'll have to come back to it in a week or so. With some feedback from the community and some more support from the patient experts here (thanks a lot, once again, Eric!) it may mature over time and perhaps help other users.

For now, if you want to set up a RAID-1 + LVM + LUKS system, most of what I described in my original post is correct. It just didn't work due to this little snag with the RAID initialization being done too late.
This would be a very good first test for the HOWTO, BTW, if you like. But make sure that you use the patched mkinitrd package.

gargamel
 
Old 06-03-2008, 05:37 AM   #28
buck private
LQ Newbie
 
Registered: Jun 2008
Posts: 2

Rep: Reputation: 0
My scenario is different. I have a 4-disk RAID5, and my present intention is to create 2 partitions on each HD. The first is small: sda1 is /boot and sd[bcd]1 are swap. sd[abcd]2 would then comprise /dev/md0, and LVM will break that up into / and /var, where /var will be mounted with the noatime option.
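If I understand the man page right, that array would be created with something like this (device names as above; other options omitted):
Code:
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2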

I'll have to think about a separate /home.

So, while the RAID1 setup described here is essentially correct, it certainly does not cover all possibilities, and Eric's README_*.TXT files don't explain everything, so liberal use of "man" will be required. Unfortunately, man pages won't be available, because there won't be a working Slackware system during the process... IMO, Eric failed to consider that you have to accomplish this "blind".

If this thread were copied and pasted into a HOWTO, it would be a darn good start. Fleshing it out with a bit of explanation wouldn't hurt either.

Knowing this, I'll print out a couple of man pages before running dd. But I can't help feeling some trepidation.
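For the record, the random write I keep mentioning is along these lines (destructive, and the target device is only an example):
Code:
# overwrite the disk with random data before encrypting it - DESTROYS ALL DATA
dd if=/dev/urandom of=/dev/sda bs=1M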
 
Old 06-03-2008, 07:28 AM   #29
Alien Bob
Slackware Contributor
 
Registered: Sep 2005
Location: Eindhoven, The Netherlands
Distribution: Slackware
Posts: 8,559

Rep: Reputation: 8106
Quote:
Originally Posted by buck private View Post
So, while the RAID1 setup described here is essentially correct, it certainly does not cover all possibilities, and Eric's README_*.TXT files don't explain everything, so liberal use of "man" will be required. Unfortunately, man pages won't be available, because there won't be a working Slackware system during the process... IMO, Eric failed to consider that you have to accomplish this "blind".
First off - the README_RAID.TXT was not written by me. Second, that README covers a RAID5 setup in as much detail as RAID1.

Third, using LVM and/or disk encryption is unrelated to your use of RAID. Combining the information in the three READMEs for RAID, LVM and CRYPT, you should have no real problems putting it all together.

I did not 'fail' to consider anything - you may well have a printer and/or another internet-enabled computer available during the install. I also assume that people who want to combine RAID, LVM and encryption are knowledgeable enough to find a way - I am not doing any hand-holding here.

Eric
 
Old 08-04-2011, 06:19 AM   #30
Dinithion
Member
 
Registered: Oct 2007
Location: Norway
Distribution: Slackware 14.1
Posts: 446

Rep: Reputation: 59
Quote:
Originally Posted by gargamel View Post
Code:
VFS: Cannot open root device "fd01" or unknown-block(253,1)
Please append a correct "root=" boot option; here are the available partitions:
0300   78150744 hda driver: ide-disk
  0301   2000061 hda1
  0302    128520 hda2
  0303  75874995 hda3
...
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(253,1)
I know it's an old thread, but I ran into a related problem and a fix, and thought it might be useful for future reference.

I got this exact message. I like having my own custom minimal kernel, and I spent two hours of useless debugging because of this error. It was a pretty dumb mistake, but worth mentioning: I have a custom .config that I use as a template, and it didn't include initrd support. So if you get this message (or a similar one) while using an initrd with a custom kernel, check that your kernel actually has initrd support.
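A quick check looks like this (the kernel source path is an example; the /proc variant only works if the kernel was built with IKCONFIG_PROC):
Code:
grep BLK_DEV_INITRD /usr/src/linux/.config
# or, for the running kernel, if it exposes its config:
zgrep BLK_DEV_INITRD /proc/config.gz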
 
  

