LinuxQuestions.org (/questions/)
-   Slackware (http://www.linuxquestions.org/questions/slackware-14/)
-   -   Software RAID 1 vs. LILO vs. kernel-generic (http://www.linuxquestions.org/questions/slackware-14/software-raid-1-vs-lilo-vs-kernel-generic-889337/)

kikinovak 07-01-2011 03:17 AM

Software RAID 1 vs. LILO vs. kernel-generic
 
Hi,

I'm currently trying to set up Slackware 13.37 on a server, using software RAID 1. I'm using the README_RAID.TXT document at the root of the Slackware disc as a reference. Anyway, here's what I have so far.

/dev/md1 -> /boot partition
/dev/md2 -> swap partition
/dev/md3 -> / partition

Code:

[root@raymonde:~] # fdisk -l /dev/sd{a,b}

Disk /dev/sda: 41.1 GB, 41110142976 bytes
255 heads, 63 sectors/track, 4998 cylinders, total 80293248 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x573fb416

  Device Boot      Start        End      Blocks  Id  System
/dev/sda1              63      192779      96358+  fd  Linux raid autodetect
/dev/sda2          192780    2152709      979965  fd  Linux raid autodetect
/dev/sda3        2152710    80293247    39070269  fd  Linux raid autodetect

Disk /dev/sdb: 41.1 GB, 41110142976 bytes
255 heads, 63 sectors/track, 4998 cylinders, total 80293248 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x1ecbbdc6

  Device Boot      Start        End      Blocks  Id  System
/dev/sdb1              63      192779      96358+  fd  Linux raid autodetect
/dev/sdb2          192780    2152709      979965  fd  Linux raid autodetect
/dev/sdb3        2152710    80293247    39070269  fd  Linux raid autodetect

[root@raymonde:~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sdb1[1] sda1[0]
      96256 blocks [2/2] [UU]
     
md2 : active raid1 sdb2[1] sda2[0]
      979840 blocks [2/2] [UU]
     
md3 : active raid1 sdb3[1] sda3[0]
      39070144 blocks [2/2] [UU]
     
unused devices: <none>
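
# Each array's superblock version and state could be double-checked here,
# e.g. with "mdadm --detail /dev/md1" or "mdadm --examine /dev/sda1".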

[root@raymonde:~] # cat /etc/lilo.conf
# LILO configuration file
# generated by 'liloconfig'
#
# Start LILO global section
# Append any additional kernel parameters:
append="nomodeset quiet vt.default_utf8=0"
boot = /dev/md1
lba32
raid-extra-boot = mbr-only
<snip>
# End LILO global section
# Linux bootable partition config begins
image = /boot/vmlinuz
  root = /dev/md3
  label = Linux
  read-only
image = /boot/vmlinuz-generic-smp-2.6.37.6-smp
  initrd = /boot/initrd.gz
  root = /dev/md3
  label = Linuxgeneric
  read-only
# Linux bootable partition config ends

[root@raymonde:~] # cat /etc/mkinitrd.conf
SOURCE_TREE="/boot/initrd-tree"
CLEAR_TREE="1"
OUTPUT_IMAGE="/boot/initrd.gz"
KERNEL_VERSION="$(uname -r)"
KEYMAP="fr_CH-latin1"
MODULE_LIST="ext4"
ROOTDEV="/dev/md3"
ROOTFS="ext4"
RESUMEDEV="/dev/md2"
RAID="1"
LVM="0"
UDEV="1"
MODCONF="0"
WAIT="1"

I created an initrd image using 'mkinitrd -F', added a corresponding stanza to /etc/lilo.conf and ran 'lilo' afterwards.
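
For reference, if I read mkinitrd's options correctly, the explicit equivalent of 'mkinitrd -F' with the config above should be roughly this (double-check against 'man mkinitrd'):

Code:

# mkinitrd -c -k 2.6.37.6-smp -m ext4 -f ext4 -r /dev/md3 -h /dev/md2 -l fr_CH-latin1 -R -u -w 1 -o /boot/initrd.gz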

Now I can boot the stock huge kernel all right, but I can't boot the generic kernel. Whenever I try, the boot process stops with the following error message:

Code:

mount: mounting /dev/md3 on /mnt failed: Device or resource busy
ERROR: no /sbin/init found on rootdev
...
/ #

Now what did I forget here? I admit I'm clueless.

mRgOBLIN 07-01-2011 04:29 AM

I would say you are missing the module for your drive controller.

Run /usr/share/mkinitrd/mkinitrd_command_generator.sh and see what modules it lists.

Code:

/usr/share/mkinitrd/mkinitrd_command_generator.sh -c

The -c switch will even output a mkinitrd.conf for you.
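
If I remember the script's switches right, you can also point it at a specific kernel version with -k, e.g.:

Code:

/usr/share/mkinitrd/mkinitrd_command_generator.sh -k 2.6.37.6-smp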

kikinovak 07-01-2011 06:08 AM

No, that's not the problem. But I have another problem related to it. I *think* there are remnants of a previous install that may be causing this. So here's my question.

Even when I completely nuke my partition tables (e.g. dd if=/dev/zero of=/dev/sda bs=512 count=64, and the same for /dev/sdb), these /dev/mdX devices keep coming back, to my exasperation.

How can I nuke my partition tables and get rid of any /dev/mdX for good? The problem occurs with any Linux installer: there seems to be no way to get rid of RAID arrays once and for all. They always come back.

mRgOBLIN 07-01-2011 06:28 AM

I think (not 100% sure) that you need to make sure your drive controller is not set to RAID mode in the BIOS.
You can use mdadm's --zero-superblock --force (just to be sure).
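
That's also why the dd above doesn't help: the md superblock lives inside each partition (near the end, in the case of 0.90-format metadata), not in the partition table, so zeroing the first sectors of the disk leaves it intact. You can check each partition for leftover metadata, for example:

Code:

# mdadm --examine /dev/sda1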

Before creating arrays do a
Code:

mdadm -Ss
to stop the md12X arrays that seem to appear.

Make sure you use --metadata=0.90 when creating the RAID arrays with mdadm, too.

This has all come about with the newer mdadm versions supporting BIOS-level RAID.

kikinovak 07-03-2011 03:06 AM

OK, I experimented some more and found the solution. The culprit was indeed a /dev/mdX remnant from a previous installation. What I found out: if you want to install using software RAID on disks that carried a previous RAID configuration, you have to do the following:

1) stop all RAID arrays, as suggested above:

Code:

# mdadm -Ss
2) partition the disks using fdisk or cfdisk

3) run the following on every single partition, for example:

Code:

# mdadm --zero-superblock /dev/sda1
# mdadm --zero-superblock /dev/sda2
# mdadm --zero-superblock /dev/sda3
# mdadm --zero-superblock /dev/sdb1
# mdadm --zero-superblock /dev/sdb2
# mdadm --zero-superblock /dev/sdb3

4) Only then create the new disk arrays (mdadm --create etc.)
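
For example, a minimal sketch for this particular layout, with --metadata=0.90 as suggested above (adjust the devices to your setup):

Code:

# mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda1 /dev/sdb1
# mdadm --create /dev/md2 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda2 /dev/sdb2
# mdadm --create /dev/md3 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda3 /dev/sdb3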

Thanks for your help, mRgOBLIN!

