By Erik_FL at 2006-10-01 10:46
I had to use dmraid to find and map the RAID sets with the device-mapper in Linux.
Once I got dmraid to work, my problem was how to make a working initrd RAM disk image to boot Slackware Linux. It turned out to be quite a bit more complicated than I expected.
I had to create a new "linuxrc" script to map the RAID volumes and mount the root filesystem. I also had to write a script to include additional files in the initrd RAM disk image.
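Before building the initrd, it helps to see what dmraid itself does. This is a sketch of the usual discovery and activation commands; the exact set names (like the pdc_* names used later in this article) depend on the RAID controller:

```
dmraid -r    # list the raw disks that carry vendor RAID metadata
dmraid -s    # show the RAID sets that were discovered
dmraid -ay   # activate all sets via the device-mapper, creating /dev/mapper nodes
```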
Here is the new "linuxrc" script.
Code:
# Boot parameters:
# real_root=rootdev Replace "rootdev" with real root device
# root_fs=rootfs Replace "rootfs" with real root filesystem type
# real_init=rl Replace "rl" with a run level, EX: 4
# single Start in single user mode, I.E. "real_init=-s"
# auto Pass auto flag to init, I.E. "real_init=-a"
ROOTDEV=''
ROOTFS=''
REAL_INIT=''
AUTO_OPTION=''
PATH=/usr/sbin:/usr/bin:/sbin:/bin
parse_opt() {
case "$1" in
*\=*)
echo "$1" | cut -f2 -d=
;;
esac
}
# Remount ram disk read/write
mount -n -t ext2 -o remount,rw /dev/ram0 /
# Mount /proc:
mount -n -t proc proc /proc
# Mount system filesys
mount -n -t sysfs sysfs /sys
# Get default root device and file system
if [ -r /rootdev ]; then
ROOTDEV=`cat /rootdev`
fi
if [ -r /rootfs ]; then
ROOTFS=`cat /rootfs`
fi
# Scan CMDLINE for any specified real_root etc.
CMDLINE=`cat /proc/cmdline`
for x in ${CMDLINE}
do
case "${x}" in
real_root\=*)
ROOTDEV=`parse_opt "${x}"`
;;
root_fs\=*)
ROOTFS=`parse_opt "${x}"`
;;
real_init\=*)
REAL_INIT=`parse_opt "${x}"`
;;
auto)
AUTO_OPTION="-a"
;;
single)
REAL_INIT="-s"
;;
*)
;;
esac
done
# Change root filesystem type to a mount parameter
ROOTFS=${ROOTFS:+"-t $ROOTFS"}
# Load kernel modules:
if [ ! -d /lib/modules/`uname -r` ]; then
echo "No kernel modules found for Linux `uname -r`."
elif [ -x ./load_kernel_modules ]; then # use load_kernel_modules script:
echo "/boot/initrd.gz: Loading kernel modules from initrd image:"
. ./load_kernel_modules
else # load modules (if any) in order:
if ls /lib/modules/`uname -r`/*.*o 1> /dev/null 2> /dev/null ; then
echo "/boot/initrd.gz: Loading kernel modules from initrd image:"
for module in /lib/modules/`uname -r`/*.*o ; do
insmod $module
done
unset module
fi
fi
# Initialize LVM:
if [ -x /sbin/vgscan ]; then
vgscan --mknodes
sleep 10
vgchange -ay
fi
# Find any hardware RAID volumes
dmraid -ay
# If /rootdev isn't set, we'll have to trust exiting to work here.
# It's harder to clean up the initrd without a pivot_root,
# so it's a good idea to set rootdev (and rootfs) properly.
if [ "$ROOTDEV" = "" ]; then
exit 0
fi
# Switch to real root partition:
mount -n -o ro $ROOTFS $ROOTDEV /mnt
ERR=$?
if [ ! "$ERR" = "0" ]; then
echo "ERROR: mount returned error code $ERR. Trouble ahead."
exit $ERR
fi
unset ERR
# OK, in case there's no initrd directory:
if [ ! -d /mnt/initrd ]; then
mount -n -o remount,rw $ROOTFS $ROOTDEV /mnt
mkdir -p /mnt/initrd
mount -n -o remount,ro $ROOTFS $ROOTDEV /mnt
fi
umount /sys
umount /proc
cd /mnt
# bye now
echo "/boot/initrd.gz: OK exiting"
pivot_root . initrd
exec <dev/console >dev/console 2>&1
exec /sbin/init ${AUTO_OPTION} ${REAL_INIT}
exit 0
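The "parse_opt" helper above relies on "cut" to split a boot option at its "=" sign: field 2 is everything between the first and second "=", which is the whole value as long as the value itself contains no "=". A quick standalone sketch:

```shell
# parse_opt as defined in linuxrc: print the value part of a name=value option
parse_opt() {
  case "$1" in
    *\=*)
      echo "$1" | cut -f2 -d=
      ;;
  esac
}

# Splitting a typical boot option
value=$(parse_opt "real_root=/dev/mapper/pdc_bbbffffihj3")
echo "$value"
```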
Here is the script that creates the initrd image. It uses the existing Slackware "mkinitrd" to do part of the work.
NOTE: Things you are likely to need to change are in bold.
Code:
LINUXVER="2.6.17" # Linux modules version
CLIBVER="2.3.5" # C library version
ROOTFS="/boot/initrd-tree" # Location of root filesystem
# Get most of the needed programs from the normal mkinitrd
mkinitrd -k $LINUXVER -c -r /dev/mapper/pdc_bbbffffihj3 -f ext3
# Create directories
for dir in \
"bin" "dev" "etc" "mnt" "proc" "sbin" "sys" "usr" \
"tmp" "var" "var/lock" "var/log" "var/run" "var/tmp"
do
if [ ! -d "$ROOTFS/$dir" ] ; then
mkdir -p "$ROOTFS/$dir"
fi
done
# Create devices
pushd "$ROOTFS/dev" > /dev/null
# Remove existing devices
rm -Rf *
# Required devices
mknod -m u=rw,g=,o= console c 5 1
chown root:tty console
mknod -m u=rw,g=rw,o= ram0 b 1 0
chown root:disk ram0
mknod -m u=rw,g=r,o= mem c 1 1
chown root:kmem mem
mknod -m u=rw,g=r,o= kmem c 1 2
chown root:kmem kmem
mknod -m u=rw,g=rw,o=rw null c 1 3
chown root:root null
mknod -m u=rw,g=rw,o=rw zero c 1 5
chown root:root zero
mkdir vc
chmod u=rwx,g=rx,o=rx vc
chown root:root vc
mknod -m u=rw,g=rw,o= vc/1 c 4 1
chown root:tty vc/1
ln -s vc/1 tty1
# IDE Disks (up to 20) max 64 partitions per disk
drives=4
partitions=9
if [ $drives -gt 0 ] ; then
majors=( 3 22 33 34 56 57 88 89 90 91)
for drv in `seq 0 $(($drives-1))` ; do
dev="abcdefghijklmnopqrst"
dev=hd${dev:$drv:1}
major=${majors[$(($drv/2))]}
minor=$(( ($drv%2) * 64 ))
mknod -m u=rw,g=rw,o= $dev b $major $minor
chown root:disk $dev
if [ $partitions -gt 0 ] ; then
for i in `seq 1 $partitions` ; do
mknod -m u=rw,g=rw,o= $dev$i b $major $(($minor+$i))
chown root:disk $dev$i
done
fi
done
fi
# SCSI Disks (0 to 127) max 16 partitions per disk
drives=4
partitions=9
if [ $drives -gt 0 ] ; then
majors=( 8 65 66 67 68 69 70 71)
for drv in `seq 0 $(($drives-1))` ; do
dev="abcdefghijklmnopqrstuvwxyz"
if [ $drv -lt 26 ] ; then
dev=sd${dev:$drv:1}
else
dev=sd${dev:$(($drv/26-1)):1}${dev:$(($drv%26)):1}
fi
major=${majors[$(($drv/16))]}
minor=$(( ($drv%16) * 16 ))
mknod -m u=rw,g=rw,o= $dev b $major $minor
chown root:disk $dev
if [ $partitions -gt 0 ] ; then
for i in `seq 1 $partitions` ; do
mknod -m u=rw,g=rw,o= $dev$i b $major $(($minor+$i))
chown root:disk $dev$i
done
fi
done
fi
# Floppy disks A and B
for i in `seq 0 1` ; do
mknod -m u=rw,g=rw,o= fd$i b 2 $i
chown root:floppy fd$i
done
# Device mapper for "dmraid"
mkdir mapper
chmod u=rwx,g=rx,o=rx mapper
chown root:root mapper
mknod -m u=rw,g=rw,o= mapper/control c 10 63
chown root:root mapper/control
# Done with devices
popd > /dev/null
# Copy scripts and programs
cp -p linuxrc "$ROOTFS"
chmod u=rwx,g=rx,o=rx "$ROOTFS/linuxrc"
cp -p /sbin/dmraid "$ROOTFS/sbin"
cp -p /bin/cut "$ROOTFS/bin"
for lib in \
"libdevmapper.so" "libdevmapper.so.1.01" \
"libc.so.6" "ld-linux.so.2" \
"ld-$CLIBVER.so" "libc-$CLIBVER.so"
do
if [ -e "/lib/$lib" ] ; then
cp -Pp "/lib/$lib" "$ROOTFS/lib/$lib"
else
echo "Library file not found \"/lib/$lib\""
exit 1
fi
done
# Make the compressed image file
mkinitrd
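The IDE device-node arithmetic in the script follows the kernel's fixed numbering: two drives per IDE controller, block majors 3, 22, 33, and so on, with the slave drive on each controller starting at minor 64. A standalone sketch of the computation for the first four drives:

```shell
# Reproduce the major/minor arithmetic used for hda..hdd above
majors=(3 22 33 34)
letters="abcd"
for drv in 0 1 2 3; do
  dev=hd${letters:$drv:1}
  major=${majors[$((drv/2))]}       # two drives share one controller major
  minor=$(( (drv%2) * 64 ))         # slave drive starts at minor 64
  echo "$dev major=$major minor=$minor"
done
```

Running it prints hda at 3,0 and hdb at 3,64 on the first controller, then hdc at 22,0 and hdd at 22,64 on the second.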
I had no luck at all getting "lilo" to work with a RAID device created by dmraid. I was able to make "grub" work, but I had to do a "native" installation. To do a "native" installation you create a bootable floppy or bootable CD. After booting the floppy or CD, you can use "grub" to install itself to the RAID array. It makes calls to the BIOS, which is why it can correctly access the RAID array. I was not able to install "grub" from Linux, though there may be a patch for that. I didn't bother looking since "grub" only has to be installed once (unlike lilo).
If you change the "grub" boot menu file "/boot/grub/menu.lst" it is not necessary to re-install "grub". Also, if you make a copy of the Linux/grub boot block for Windows XP to use, you don't have to update that file when you change boot entries (unlike lilo).
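For reference, the "native" installation amounts to booting the grub floppy or CD and typing a few commands at the grub shell. This is a sketch assuming the boot files live on the third partition of the first BIOS disk, matching the menu entries used in this article:

```
grub> root (hd0,2)
grub> setup (hd0)
grub> quit
```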
Here is an example boot menu using grub and the initrd image.
NOTE: Indented text is a wrapped line, not a separate line.
Code:
default 0
timeout 30
title Linux
root (hd0,2)
kernel /boot/vmlinuz vga=773 auto root=/dev/ram0 load_ramdisk=1 ramdisk_size=4096
real_root=/dev/mapper/pdc_bbbffffihj3 root_fs=ext3 init=/linuxrc
initrd /boot/initrd.gz
title Linux Single User Mode
root (hd0,2)
kernel /boot/vmlinuz single root=/dev/ram0 load_ramdisk=1 ramdisk_size=4096
real_root=/dev/mapper/pdc_bbbffffihj3 root_fs=ext3 init=/linuxrc
initrd /boot/initrd.gz
title Linux Old Version
root (hd0,2)
kernel /boot/vmlinuz.old single root=/dev/ram0 load_ramdisk=1 ramdisk_size=4096
real_root=/dev/mapper/pdc_bbbffffihj3 root_fs=ext3 init=/linuxrc
initrd /boot/initrd.old.gz
title Linux Known Good
root (hd0,2)
kernel /boot/vmlinuz.kgd single root=/dev/ram0 load_ramdisk=1 ramdisk_size=4096
real_root=/dev/mapper/pdc_bbbffffihj3 root_fs=ext3 init=/linuxrc
initrd /boot/initrd.kgd.gz
title Windows XP Pro
rootnoverify (hd0,0)
chainloader +1
There are two key points when using the "initrd" image. You must specify the kernel parameter "root=/dev/ram0", because Linux cannot identify the RAID device on its own, not even well enough to tell that it differs from the boot device (the RAM disk). You must also specify the kernel parameter "init=/linuxrc" to force Linux to execute the script, because when the boot device (/dev/ram0) and the root device are the same, Linux normally does not run any "linuxrc" script.
You need "load_ramdisk=1" to load the initrd image, and you will probably want "ramdisk_size=" to specify the size of RAM disk needed. All of the initrd images that I've built (without modules) are around 3 megabytes.
The script takes some additional parameters. You can override the root device given to the "mkinitrd" command by including the "real_root=" option. You can override the root filesystem type given to "mkinitrd" by including "root_fs=". You can specify a run level for "init" by including the "real_init=" option followed by a number. The "single" and "auto" options behave in the same way as normal, specifying single-user mode and indicating an automatic boot was done.
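Taken together, the option handling can be exercised outside the initrd by feeding the loop a sample command line instead of /proc/cmdline. This sketch reuses the parsing logic from the linuxrc script above:

```shell
# Sample kernel command line standing in for /proc/cmdline
CMDLINE="auto root=/dev/ram0 real_root=/dev/mapper/pdc_bbbffffihj3 root_fs=ext3 real_init=4"

ROOTDEV=''; ROOTFS=''; REAL_INIT=''; AUTO_OPTION=''
parse_opt() { case "$1" in *\=*) echo "$1" | cut -f2 -d= ;; esac; }

for x in ${CMDLINE}; do
  case "${x}" in
    real_root\=*) ROOTDEV=$(parse_opt "${x}") ;;
    root_fs\=*)   ROOTFS=$(parse_opt "${x}") ;;
    real_init\=*) REAL_INIT=$(parse_opt "${x}") ;;
    auto)         AUTO_OPTION="-a" ;;
    single)       REAL_INIT="-s" ;;
  esac
done
echo "root=$ROOTDEV fs=$ROOTFS init=$REAL_INIT $AUTO_OPTION"
```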
Finally, there are some messy details to deal with in the Linux init script, "rc.S". Unfortunately when "udev" is started by the script, it will recreate the entire list of devices. That is guaranteed to wipe out the contents of "/dev/mapper" even if you happen to copy them from the RAM disk to the root filesystem. It is necessary to leave the RAM disk mounted, and then copy the RAID device names after "udev" is done. That also means some of the "rc.S" script will have to refer to devices as "/initrd/dev/mapper/devicename" rather than "/dev/mapper/devicename". Here are the key places that must be changed in "rc.S".
Code:
# Initialize udev to manage /dev entries for 2.6.x kernels:
if [ -x /etc/rc.d/rc.udev ]; then
if ! grep -w nohotplug /proc/cmdline 1> /dev/null 2> /dev/null ; then
/etc/rc.d/rc.udev
fi
fi
# Enable swapping:
# NOTE: Have to wait until dmraid devices are found!
#/sbin/swapon -a
Code:
# Check the root filesystem:
if [ ! $READWRITE = yes ]; then
RETVAL=0
if [ ! -r /etc/fastboot ]; then
echo "Checking root filesystem:"
/sbin/fsck $FORCEFSCK -C -a /initrd/dev/mapper/pdc_bbbffffihj3
RETVAL=$?
fi
Add the following code right after the end of the filesystem checking. I included the last line of that "if" statement. The new code is between the "fi # Done checking..." line and the "# Any /etc/mtab that exists..." line.
Code:
fi # Done checking root filesystem
# Copy device mapper devices from initrd
for file in /initrd/dev/mapper/*; do
short=`basename $file`
if [ "$short" != 'control' ]; then
rm -rf /dev/mapper/$short
cp -dpR $file /dev/mapper
fi
done
# Might have found another swap device
/sbin/swapon -a
# Any /etc/mtab that exists here is old, so we delete it to start over:
/bin/rm -f /etc/mtab*
# Remounting the / partition will initialize the new /etc/mtab:
/sbin/mount -w -o remount /
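The copy loop can be tried safely with throwaway directories standing in for /initrd/dev/mapper and /dev/mapper (the paths and set names here are placeholders):

```shell
# Stand-ins for /initrd/dev/mapper (src) and /dev/mapper (dst)
src=$(mktemp -d)
dst=$(mktemp -d)
touch "$src/control" "$src/pdc_bbbffffihj3" "$src/pdc_bbbffffihj31"

# Same loop as in rc.S: copy every mapper node except 'control'
for file in "$src"/*; do
  short=$(basename "$file")
  if [ "$short" != 'control' ]; then
    rm -rf "$dst/$short"
    cp -dpR "$file" "$dst"
  fi
done
ls "$dst"
```

The 'control' node is skipped because udev recreates it; copying it over would clobber the live device-mapper control interface.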
You said you had to do a "native" GRUB installation from the BIOS since GRUB couldn't recognize your RAID array in Linux. You could get a Linux GRUB installation working by supplying a custom device map. Here I used the base RAID array device, as opposed to the individual isw_cafaadgife_RAID01, isw_cafaadgife_RAID02, and isw_cafaadgife_RAID03, which correspond to RAID partitions (hd0,0), (hd0,1), and (hd0,2) respectively. Running grub with this device map, everything works out nicely.