LinuxQuestions.org
Slackware: This forum is for the discussion of Slackware Linux.

Old 08-27-2009, 04:55 PM   #1
Ja5
LQ Newbie
 
Registered: Aug 2009
Location: Texas
Distribution: Slackware
Posts: 5

Rep: Reputation: 0
Installation on P6T6 WS Revolution SAS Drives


I've done some googling on this issue and haven't found much. I have 2 Seagate Cheetah 300GB SAS drives in RAID 0. They are split into 3 partitions which hold Windows Vista 64, Microsoft Hyper-V Server 2008 R2, and hopefully Slackware 12.2. The RAID group is set up and functioning, as I have already loaded Vista and Hyper-V Server 2008 R2. When I enter the fdisk section of the setup (after loading the default kernel) I can edit /dev/sda - sdf, which tells me that it can see the drives and partitions, but it doesn't recognize that they are in RAID 0. On the disc that came with the mobo I have Red Hat drivers, but I am not sure if they will help me or how to attempt to load them. Anyone have any ideas?
 
Old 08-28-2009, 12:07 AM   #2
bassmadrigal
LQ Guru
 
Registered: Nov 2003
Location: West Jordan, UT, USA
Distribution: Slackware
Posts: 8,792

Rep: Reputation: 6656
The most likely culprit is that you are not running the module needed to properly see your drives. What kernel are you loading when booting off the disc? Make sure you are running the huge kernel (although I can't remember the exact command to invoke it).
 
Old 08-28-2009, 01:11 PM   #3
Ja5
LQ Newbie
 
Registered: Aug 2009
Location: Texas
Distribution: Slackware
Posts: 5

Original Poster
Rep: Reputation: 0
I have tried the default hugesmp.s and huge.s kernels. Today, I fired up cfdisk on /dev/sda and /dev/sdb. It did not recognize any partitions, just a 300GB drive.
 
Old 08-28-2009, 01:51 PM   #4
bassmadrigal
LQ Guru
 
Registered: Nov 2003
Location: West Jordan, UT, USA
Distribution: Slackware
Posts: 8,792

Rep: Reputation: 6656
Try adding this to the boot parameters

Code:
raid-extra-boot=/dev/sda,/dev/sdb
 
Old 08-29-2009, 12:08 PM   #5
Ja5
LQ Newbie
 
Registered: Aug 2009
Location: Texas
Distribution: Slackware
Posts: 5

Original Poster
Rep: Reputation: 0
I'm still a n00b... I don't know how to add boot parameters.
 
Old 08-29-2009, 04:17 PM   #6
bassmadrigal
LQ Guru
 
Registered: Nov 2003
Location: West Jordan, UT, USA
Distribution: Slackware
Posts: 8,792

Rep: Reputation: 6656
When it first boots and shows the screen after your BIOS (the one where most people just hit Enter), try typing that in and hitting Enter. It will then load all the Slack stuff and, hopefully, your RAID drives with it.
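For example, if that screen gives you a `boot:` prompt, the kernel name and any extra parameters go together on one line. (This is just an illustration combining the kernel and the parameter suggested above; adjust the device names to your own setup.)

```
boot: hugesmp.s raid-extra-boot=/dev/sda,/dev/sdb
```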

EDIT: It has been a while since I've booted the install disc (haven't had time to install 13.0 yet), so I don't remember exactly what the screen looks like or what it says.

Last edited by bassmadrigal; 08-29-2009 at 04:19 PM.
 
Old 08-29-2009, 04:51 PM   #7
Erik_FL
Member
 
Registered: Sep 2005
Location: Boynton Beach, FL
Distribution: Slackware
Posts: 821

Rep: Reputation: 258
Quote:
Originally Posted by Ja5 View Post
I've done some googling on this issue and haven't found much. I have 2 Seagate Cheetah 300GB SAS drives in RAID 0. They are split into 3 partitions which hold Windows Vista 64, Microsoft Hyper-V Server 2008 R2, and hopefully Slackware 12.2. The RAID group is set up and functioning, as I have already loaded Vista and Hyper-V Server 2008 R2. When I enter the fdisk section of the setup (after loading the default kernel) I can edit /dev/sda - sdf, which tells me that it can see the drives and partitions, but it doesn't recognize that they are in RAID 0. On the disc that came with the mobo I have Red Hat drivers, but I am not sure if they will help me or how to attempt to load them. Anyone have any ideas?
When you say RAID 0 do you mean Intel Matrix Storage Manager RAID or the Marvell Serial Attached SCSI RAID? Those are both "fake hardware RAID" and not compatible with the standard Slackware Setup disc.

I have the P6T Deluxe and did manage to get Slackware working with an Intel Matrix Storage Manager RAID 0 array. To do that I had to use a program called "dmraid" that can detect the RAID 0 metadata and then configure the Linux device mapper to access the array.

I recommend that you take a look at some of my previous responses to people asking about fake hardware RAID.

http://www.linuxquestions.org/questi...e-raid-727299/

http://www.linuxquestions.org/questi...k-12.0-586440/

http://www.linuxquestions.org/questi...e-12.2-728419/

I'm about to put Slackware 13 on my RAID array so I should have updated scripts in a few days. If you want to see what I did on Slackware 12.2 look here.

http://personalpages.bellsouth.net/e/r/erikfl/raid/

The boot CD is for Slackware 12.2 and is compatible with the P6T. It has a copy of "dmraid" on it. After booting, log in as root with no password. Then you can use this command to detect the RAID arrays.

dmraid -ay

To look at the detected arrays do this.

ls -l /dev/mapper

Then you can mount RAID arrays using the names you find in "/dev/mapper".

mount /dev/mapper/longnameofarray /mnt

Replace "longnameofarray" with the actual name for the partition in "/dev/mapper". The number at the end of each name is a partition number.
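To illustrate that naming convention (the array name below is a made-up example in the style of a Promise fake-RAID label; yours will differ), the trailing partition number can be split off in the shell:

```shell
# split a /dev/mapper entry into array name and partition number
# ("pdc_ccfafbbhc3" is an example name, not your actual array)
name="pdc_ccfafbbhc3"
array="${name%%[0-9]*}"   # everything before the first digit
part="${name##*[!0-9]}"   # the trailing digits
echo "$array $part"
```

Running this prints `pdc_ccfafbbhc 3`, i.e. the whole-array name and the partition number.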

Last edited by Erik_FL; 08-29-2009 at 05:03 PM.
 
Old 08-29-2009, 07:08 PM   #8
mRgOBLIN
Slackware Contributor
 
Registered: Jun 2002
Location: New Zealand
Distribution: Slackware
Posts: 999

Rep: Reputation: 231
I do believe that mdadm now has support for the Intel ICHXR RAID *cough* controllers.
 
Old 08-29-2009, 09:53 PM   #9
Erik_FL
Member
 
Registered: Sep 2005
Location: Boynton Beach, FL
Distribution: Slackware
Posts: 821

Rep: Reputation: 258
Quote:
Originally Posted by mRgOBLIN View Post
I do believe that mdadm now has support for the Intel ICHXR RAID *cough* controllers.
How would one go about installing Slackware on an ICH10 RAID 0 array? Whenever I've tried that with a Slackware setup CD, it either would not detect the array at all or would not find the partitions during SETUP.

If there's a way to use mdadm I'd rather do that than use "dmraid".
 
Old 08-29-2009, 10:08 PM   #10
mRgOBLIN
Slackware Contributor
 
Registered: Jun 2002
Location: New Zealand
Distribution: Slackware
Posts: 999

Rep: Reputation: 231
This only pertains to version 3.x of mdadm.

I haven't tried it myself, but according to the announcement you can use the metadata from BIOS-level RAID.

http://www.kernel.org/pub/linux/util...mdadm/ANNOUNCE
 
Old 08-30-2009, 12:01 PM   #11
Ja5
LQ Newbie
 
Registered: Aug 2009
Location: Texas
Distribution: Slackware
Posts: 5

Original Poster
Rep: Reputation: 0
Thanks for the replies, all. Erik, in response to your question, I am on the Marvell controller. I'll dig through your posts a little later this afternoon and give it a shot tomorrow night. Also, I'd be interested to hear how it goes with 13.0, as that is where I'll most likely be heading next.
 
Old 08-31-2009, 12:05 PM   #12
Erik_FL
Member
 
Registered: Sep 2005
Location: Boynton Beach, FL
Distribution: Slackware
Posts: 821

Rep: Reputation: 258
Quote:
Originally Posted by Ja5 View Post
Thanks for the replies, all. Erik, in response to your question, I am on the Marvell controller. I'll dig through your posts a little later this afternoon and give it a shot tomorrow night. Also, I'd be interested to hear how it goes with 13.0, as that is where I'll most likely be heading next.
I got Slackware 13 working with relatively few changes from what I did for 12.2.

Please note that all of this is only necessary if you actually want to boot Slackware from a partition on the RAID array. If you just want to boot Slackware from a non-RAID disk and then access the RAID array then you don't need the rest of this.

Here is the script that I used to make the "initrd".

Code:
ROOTDEVNAME="/dev/sdr2"		# Name of root device
LINUXVER="2.6.29.6-smp"		# Linux modules version
CLIBVER="2.9"			# C library version
ROOTDIR="/boot/initrd-tree"	# Location of root filesystem
# Get most of the needed programs from the normal mkinitrd
mkinitrd -k $LINUXVER -c -r "$ROOTDEVNAME" -f ext3
# Create root device
cp -a "$ROOTDEVNAME" "$ROOTDIR/dev"
# Copy scripts and programs
cp -p init "$ROOTDIR"
chmod u=rwx,g=rx,o=rx "$ROOTDIR/init"
cp -p /sbin/dmraid "$ROOTDIR/sbin"
for lib in \
   "libdevmapper.so.1.02" \
   "libc.so.6" "ld-linux.so.2" \
   "ld-$CLIBVER.so" "libc-$CLIBVER.so"
   do
   if [ -e "/lib/$lib" ] ; then
      cp -Pp "/lib/$lib" "$ROOTDIR/lib/$lib"
   else
      echo "Library file not found \"/lib/$lib\""
      exit 1
   fi
done
# Make the compressed image file
mkinitrd
Here are the modified parts of the "init" script that I used. Changes to the normal "init" script are in bold.

Code:
# Parse command line
for ARG in `cat /proc/cmdline`; do
  case $ARG in
    rescue)
      RESCUE=1
    ;;
    root=/dev/*)
      ROOTDEV=`echo $ARG | cut -f2 -d=`
    ;;
    rootfs=*)
      ROOTFS=`echo $ARG | cut -f2 -d=`
    ;;
    luksdev=/dev/*)
      LUKSDEV=`echo $ARG | cut -f2 -d=`
    ;;
    waitforroot=*)
      WAIT=`echo $ARG | cut -f2 -d=`
    ;;
    root=LABEL=*)
      ROOTDEV=`echo $ARG | cut -f2- -d=`
    ;;
    resume=*)
      RESUMEDEV=`echo $ARG | cut -f2 -d=`
    ;;
    0|1|2|3|4|5|6)
      RUNLEVEL=$ARG
    ;;
    single)
      RUNLEVEL=1
    ;;
  esac
done
Code:
# Load a custom keyboard mapping:
if [ -n "$KEYMAP" ]; then
  echo "${INITRD}:  Loading '$KEYMAP' keyboard mapping:"
  tar xzOf /etc/keymaps.tar.gz ${KEYMAP}.bmap | loadkmap
fi

# Find any dmraid detectable partitions
dmraid -ay

if [ "$RESCUE" = "" ]; then 
  # Initialize RAID:
  if [ -x /sbin/mdadm ]; then
    /sbin/mdadm -E -s >/etc/mdadm.conf
    /sbin/mdadm -A -s
  fi
  
  # Find root device if a label was given:
  if echo $ROOTDEV | grep -q "LABEL=" ; then
    ROOTDEV=`findfs $ROOTDEV`
  fi
I copied the standard "init" script from the "/boot/initrd-tree" directory and then edited the file, saving a copy in the directory with my script to make the initrd.

In order to create some more friendly device names I added a file, "/etc/udev/rules.d/10-local.rules" containing the following.

Code:
KERNEL=="dm-0", NAME="sdr", OPTIONS+="last_rule"
KERNEL=="dm-1", NAME="sdr1", OPTIONS+="last_rule"
KERNEL=="dm-2", NAME="sdr2", OPTIONS+="last_rule"
KERNEL=="dm-3", NAME="sdr4", OPTIONS+="last_rule"
KERNEL=="dm-4", NAME="sdr5", OPTIONS+="last_rule"
KERNEL=="dm-5", NAME="sdr6", OPTIONS+="last_rule"
KERNEL=="dm-6", NAME="sdr7", OPTIONS+="last_rule"
KERNEL=="dm-7", NAME="sdr8", OPTIONS+="last_rule"
To find out the correct information needed for the rules file I had to look at the devices created by "dmraid".

ls -l /dev/mapper

The minor device ID corresponds to the number after "dm-". I used a name of "sdrX" where X is the partition number. "sdr" is actually a valid SCSI disk device name, but it's very unlikely to be in use since it corresponds to the 18th SCSI disk.

Depending on what device names you decide to use for your root and swap devices, you have to specify those in the grub boot loader "/boot/grub/menu.lst" file. Also, make sure that you create the device names in the "/dev" folder of your root device BEFORE "udev" runs, using some other boot CD if necessary.

Example grub "/boot/grub/menu.lst" entry.

Code:
title Linux
root (hd0,1)
kernel /boot/vmlinuz vga=791 root=/dev/sdr2 ro vt.default_utf8=0 load_ramdisk=1 ramdisk_size=4096
initrd /boot/initrd.gz
Here is what I added to my "/etc/fstab".

Code:
/dev/sdr5        swap             swap        defaults         0   0
/dev/sdr2        /                ext3        defaults         1   1
The simplest way to install Slackware is to use some other non-RAID disk first and get everything working before you copy the files to the RAID array. In any case, you have to build a copy of "dmraid" or get a copy of the binary file ahead of time.

You can install Slackware directly to the RAID array, but it is a bit messy. Here are the steps required.
  • Build "dmraid" or obtain a copy
  • Copy "/sbin/dmraid" and "/lib/libdevmapper.so.1.02" from a Slackware system to a floppy or CD
  • Boot the normal Slackware 13 setup disc
  • Copy "dmraid" from the floppy or CD to "/sbin"
  • Copy "libdevmapper.so.1.02" from the floppy or CD to "/lib"
  • Detect the RAID arrays using "dmraid -ay"
  • Look at the device names with "ls -l /dev/mapper"
  • Note the names, major and minor device IDs for later
  • Edit the Slackware setup script (see below)
  • Install Slackware normally except do not configure swap space
  • Before rebooting you must perform some other steps (see below)

For Slackware setup to recognize "dmraid" created devices, edit the "setup" script used by Slackware. You will have to edit it each time you boot since it's stored in a file on the boot CD. You can copy the edited script to a floppy disk and save it for the next time if you want.

cd /usr/lib/setup
vi setup

Do the editing and then use the script.

setup

The full path of the script is "/usr/lib/setup/setup".

You have to edit two lines. The edited text is shown in bold.

Before.

Code:
vgchange -ay 1> /dev/null 2> /dev/null
if probe -l 2> /dev/null | egrep 'Linux$' 1> /dev/null 2> /dev/null ; then
 probe -l 2> /dev/null | egrep 'Linux$' | sort 1> $TMP/SeTplist 2> /dev/null
else
 dialog --title "NO LINUX PARTITIONS DETECTED" \
After

Code:
vgchange -ay 1> /dev/null 2> /dev/null
if fdisk -l /dev/mapper/pdc_ccfafbbhc 2> /dev/null | egrep 'Linux$' 1> /dev/null 2> /dev/null ; then
 fdisk -l /dev/mapper/pdc_ccfafbbhc 2> /dev/null | egrep 'Linux$' | sort 1> $TMP/SeTplist 2> /dev/null
else
 dialog --title "NO LINUX PARTITIONS DETECTED" \
Replace "pdc_ccfafbbhc" with the correct device name for your entire RAID array (without the partition number at the end). That should be one of the names you saw with "ls -l /dev/mapper".
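If you'd rather not retype the edit in vi on every boot, the same two-line change can be made with a sed substitution. This is my own sketch, not part of the instructions above; "pdc_ccfafbbhc" is still the example array name and must be replaced with yours.

```shell
# sketch: perform the setup-script edit with sed instead of vi
# ARRAY is an example name -- substitute the one from "ls -l /dev/mapper"
ARRAY=/dev/mapper/pdc_ccfafbbhc
edit_setup() {
  sed "s|probe -l 2> /dev/null|fdisk -l $ARRAY 2> /dev/null|g"
}
# demonstration on one of the original lines:
echo "if probe -l 2> /dev/null | egrep 'Linux\$' 1> /dev/null 2> /dev/null ; then" | edit_setup
```

On the installer you would apply it to the real file, e.g. `edit_setup < /usr/lib/setup/setup > /tmp/setup.new` and then copy the result back over "/usr/lib/setup/setup".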

That will fix ONLY the list of TARGET partitions. You will have to set up the swap space in "/etc/fstab" after installing Slackware.

Assuming that you have successfully installed Slackware, you now have to make the RAID array able to boot.

Mount the new Slackware system on the RAID array.

mount /dev/mapper/pdc_ccfafbbhc1 /mnt

Replace "pdc_ccfafbbhc1" with the correct device name for your Linux partition on the RAID array.

Change to the root of the new system

chroot /mnt

Mount other needed devices

mount -t proc none /proc
mount -t sysfs none /sys


Create the devices required for your root and swap devices using "mknod".

mknod -m u=rw,g=rw,o= /dev/sdr2 b X Y
chown root:disk /dev/sdr2

Replace "X" and "Y" with the correct major and minor device ID's displayed previously by "ls -l /dev/mapper". Replace "sdr2" with whatever device name you want to use to refer to the RAID partition.
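The major/minor pair can also be pulled out of the "ls -l /dev/mapper" output mechanically. A sketch (the sample line and the "sdr2" name are made-up examples, not your real values):

```shell
# parse major/minor from a sample "ls -l /dev/mapper" line and print the
# matching mknod command (the line below is example output, not real)
line="brw-rw---- 1 root disk 253, 2 Aug 29 12:00 pdc_ccfafbbhc2"
major=$(echo "$line" | awk '{print $5}' | tr -d ',')
minor=$(echo "$line" | awk '{print $6}')
echo "mknod -m u=rw,g=rw,o= /dev/sdr2 b $major $minor"
```

This prints `mknod -m u=rw,g=rw,o= /dev/sdr2 b 253 2`, which you would then run (as root) with your own device name and numbers.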

Build or select the kernel you want to use.
Create your initrd using the script that I provided.

Install the "grub" package from the "extra" folder on the Slackware CD.

Edit the "/boot/grub/menu.lst" file.

Installing "grub" to the RAID array can be a bit tricky and I've had better luck using a "grub" boot CD to do that. The commands are similar to this.

grub
root (hd0,1)
setup (hd0,1)
quit


Replace "(hd0,1)" with the correct designation for your Linux system. "hd0" is the first hard disk and "1" is the second partition (partition 2). If you want to install to the MBR (which I don't recommend) you can use "setup (hd0)". From a grub boot CD, press the "C" key on the keyboard for command mode instead of typing in "grub". The commands following grub are the same.

Another useful command that you may want to use first in grub is this one.

find /boot/grub/menu.lst

That will list the grub device names where the file is seen by grub.

When you think that you have the system ready to boot, unmount everything and try it.

umount /sys
umount /proc
exit
umount /mnt


Ctrl Alt Del to reboot.

If you have a problem you will have to go through the steps again from the beginning except for running Slackware "setup". Use "chroot" again and fix any problems.

I found it much easier to temporarily connect a non-RAID hard disk to test all this out and provide something that can boot if Slackware in the RAID array doesn't. Alternatively you can use a boot CD with grub and "dmraid". I wrote a script to create a CD. First I had to build the kernel with a "-CD" suffix.

Code:
#!/bin/sh
#
# Script to create a bootable Linux CD
#
CDKERNELVER="2.6.29.6"		# CD Kernel version number
CDLINUXVER="${CDKERNELVER}-CD"	# CD Linux version name
LINUXVER=`uname -r`		# Booted Linux version name
FSCOMP="./rootfs.gz"		# root filesystem compressed file
FSBIN="./rootfs.bin"		# root filesystem file
ROOTFS="./rootfs"		# Where root filesystem is mounted
OUTFILE="./bootcd.iso"		# Output file
CONFIG="./config"		# Location of configuration files
CDROOT="./cdroot"		# Where to store the CD files	
GRUBBIN="/usr/sbin"		# Location of grub binary files
GRUBLDR="/usr/lib/grub/i386-pc"	# Location of grub boot loader files
BOOTIMAGE="/usr/src/linux-$CDKERNELVER/arch/i386/boot/bzImage"
SYSTEMMAP="/usr/src/linux-$CDKERNELVER/System.map"
CLIBVER="2.9"			# C library version

# CD Layout
CDBOOT="boot"
CDGRUB="$CDBOOT/grub"

# If CD files already exist, clean them
if [ -d "$CDROOT" ] ; then
   rm -R "$CDROOT"
fi

# If root filesystem file exists, clean it
if [ -f "$FSBIN" ] ; then
   rm "$FSBIN"
fi

# Create the root filesystem file
dd if=/dev/zero of="$FSBIN" bs=1k count=32768
mke2fs -m 0 -N 2000 -F "$FSBIN"
tune2fs -c 0 -i 0 "$FSBIN"

# Mount the root filesystem
mount -t ext2 -o loop "$FSBIN" "$ROOTFS"

# Create directories
for dir in \
   "bin" "dev" "etc" "mnt" "proc" "sbin" "sys" "usr" \
   "tmp" "var" "var/log" "var/run" "var/tmp" "root"
   do
   if [ ! -d "$ROOTFS/$dir" ] ; then
      mkdir -p "$ROOTFS/$dir"
   fi
done

# Create devices
pushd "$ROOTFS/dev" > /dev/null
# Required devices
mknod -m u=rw,g=,o= console c 5 1
chown root:tty console 
mknod -m u=rw,g=rw,o= ram0 b 1 0
chown root:disk ram0 
mknod -m u=rw,g=r,o= mem c 1 1
chown root:kmem mem 
mknod -m u=rw,g=r,o= kmem c 1 2
chown root:kmem kmem 
mknod -m u=rw,g=rw,o=rw null c 1 3
chown root:root null 
mknod -m u=rw,g=rw,o=rw zero c 1 5
chown root:root zero 
mkdir vc
chmod u=rwx,g=rx,o=rx vc
chown root:root vc
mknod -m u=rw,g=rw,o= vc/1 c 4 1
chown root:tty vc/1 
ln -s vc/1 tty1
mknod -m u=rw,g=rw,o= loop0 b 7 0
chown root:disk loop0 
# IDE Disks (up to 20) max 64 partitions per disk
drives=4
partitions=9
if [ $drives -gt 0 ] ; then
   majors=( 3 22 33 34 56 57 88 89 90 91)
   for drv in `seq 0 $(($drives-1))` ; do
      dev="abcdefghijklmnopqrst"
      dev=hd${dev:$drv:1} 
      major=${majors[$(($drv/2))]}  
      minor=$(( ($drv%2) * 64 ))
      mknod -m u=rw,g=rw,o= $dev b $major $minor
      chown root:disk $dev
      if [ $partitions -gt 0 ] ; then 
         for i in `seq 1 $partitions` ; do
            mknod -m u=rw,g=rw,o= $dev$i b $major $(($minor+$i)) 
            chown root:disk $dev$i
         done
      fi
   done
fi
# SCSI Disks (0 to 127) max 16 partitions per disk
drives=4
partitions=9
if [ $drives -gt 0 ] ; then
   majors=( 8 65 66 67 68 69 70 71)
   for drv in `seq 0 $(($drives-1))` ; do
      dev="abcdefghijklmnopqrstuvwxyz"
      if [ $drv -lt 26 ] ; then
         dev=sd${dev:$drv:1}
      else
         dev=sd${dev:$(($drv/26-1)):1}${dev:$(($drv%26)):1}
      fi
      major=${majors[$(($drv/16))]}  
      minor=$(( ($drv%16) * 16 ))
      mknod -m u=rw,g=rw,o= $dev b $major $minor
      chown root:disk $dev
      if [ $partitions -gt 0 ] ; then 
         for i in `seq 1 $partitions` ; do
            mknod -m u=rw,g=rw,o= $dev$i b $major $(($minor+$i)) 
            chown root:disk $dev$i
         done
      fi
   done
fi
# Floppy disks A and B
for i in `seq 0 1` ; do
   mknod -m u=rw,g=rw,o= fd$i b 2 $i 
   chown root:floppy fd$i
done
# Device mapper for "dmraid"
mkdir mapper
chmod u=rwx,g=rx,o=rx mapper
chown root:root mapper
mknod -m u=rw,g=rw,o= mapper/control c 10 63
chown root:root mapper/control
# Done with devices
popd > /dev/null

# Copy the configuration files
for cfg in \
   "fstab" "group" "inittab" "passwd" "rc" "shadow" "securetty" \
   "termcap" "nsswitch.conf" "profile" "HOSTNAME" "hosts" \
   "DIR_COLORS"
   do
   cp "$CONFIG/etc/$cfg" "$ROOTFS/etc/$cfg"
done

# Copy programs
for prg in \
   "agetty" "basename" "bash" "cat" "chgrp" "chmod" "chown" "chroot" \
   "chvt" "clear" "cmp" "cp" "cut" "date" "dd" "df" "dirname" "dmesg" \
   "du" "echo" "env" "false" "fbset" "find" "free" "grep" \
   "gunzip" "gzip" "head" "hostname" "init" "ifconfig" "kill" "killall" \
   "ln" "login" "ls" "mkdir" "mknod" "more" "mount" "mv" "ps" "pwd" \
   "reboot" "rm" "rmdir" "sh" "shutdown" "sleep" "stty" "sulogin" \
   "sync" "syslogd" "tail" "tar" "tee" "test" "touch" "tr" "true" "tty" \
   "umount" "uname" "uptime" "yes" "zcat" "vi" "elvis" "sed" "sort" \
   "uniq" "insmod" "lsmod" "rmmod" "bzip2" \
   "modprobe" "fdisk" "cfdisk" "dmraid" "mkfs" "mkdosfs" "mke2fs" \
   "mkfs.ext2" "mkfs.ext3" "mkfs.cramfs" "mkfs.reiserfs" \
   "mkreiserfs" "reiserfstune" "tune2fs" \
   "fsck" "dosfsck" "e2fsck" "fsck.ext2" "fsck.ext3" \
   "fsck.reiserfs" "reiserfsck" \
   "id" "dircolors" "shutdown" "telinit" "ldd"
   do
   found=false
   for dir in "sbin" "usr/sbin" "bin" "usr/bin" ; do
      if [ -e "/$dir/$prg" ] ; then
         found=true
         if [ ! -d `dirname "$ROOTFS/$dir/$prg"` ] ; then
            mkdir -p `dirname "$ROOTFS/$dir/$prg"`
         fi
         cp -Pp "/$dir/$prg" "$ROOTFS/$dir/$prg"
      fi
   done
   if [ $found != true ] ; then
      echo "Binary file not found \"$prg\""
      umount "$ROOTFS"
      exit 1
   fi
done

# Copy grub boot loader programs
dir="$ROOTFS$GRUBBIN"
if [ ! -d "$dir" ] ; then
   mkdir -p "$dir"
fi
cp -Pp $GRUBBIN/grub* "$dir"
# Copy grub boot loader files
dir="$ROOTFS$GRUBLDR"
if [ ! -d "$dir" ] ; then
   mkdir -p "$dir"
fi
cp -Pp $GRUBLDR/* "$dir"

# Copy libraries
for lib in \
   "ld-linux.so.2" "ld-$CLIBVER.so" "libc.so.6" "libc-$CLIBVER.so" \
   "libresolv.so.2" "libresolv-$CLIBVER.so" \
   "libacl.so.1" "libacl.so.1.1.0" "libattr.so.1" "libattr.so.1.1.0" \
   "libcrypt.so.1" "libcrypt-$CLIBVER.so" "libdl.so.2" "libdl-$CLIBVER.so" \
   "libtermcap.so.2" "libtermcap.so.2.0.8" "libblkid.so.1" "libblkid.so.1.0" \
   "libuuid.so.1" "libuuid.so.1.2" "libproc-3.2.7.so" "libnss_files.so.2" \
   "libnss_files-$CLIBVER.so" "libnss_compat.so.2" "libnss_compat-$CLIBVER.so" \
   "libnss_dns.so.2" "libnss_dns-$CLIBVER.so" "libnss_nis.so.2" \
   "libnss_nis-$CLIBVER.so" "librt.so.1" "librt-$CLIBVER.so" "libpthread.so.0" \
   "libpthread-$CLIBVER.so" "libe2p.so.2" "libe2p.so.2.3" \
   "libnsl.so.1" "libnsl-$CLIBVER.so" "libncurses.so.5" "libncurses.so.5.7" \
   "libncursesw.so.5" "libncursesw.so.5.7" "libgpm.so.1" "libgpm.so.1.19.0" \
   "libcom_err.so.2" "libcom_err.so.2.1" "libext2fs.so.2" "libext2fs.so.2.4" \
   "libm.so.6" "libm-$CLIBVER.so" "libdevmapper.so" "libdevmapper.so.1.02" \
   "libcap.so.2.16" "libcap.so.2"
   do
   found=false
   for dir in "lib" "lib/tls" ; do
      if [ -e "/$dir/$lib" ] ; then
         found=true
         if [ ! -d `dirname "$ROOTFS/$dir/$lib"` ] ; then
            mkdir -p `dirname "$ROOTFS/$dir/$lib"`
         fi
         cp -Pp "/$dir/$lib" "$ROOTFS/$dir/$lib"
      fi
   done
   if [ $found != true ] ; then
      echo "Library file not found \"$lib\""
      umount "$ROOTFS"
      exit 1
   fi
done

# Copy modules
dir="lib/modules/$CDLINUXVER"
for module in \
   "LEAVE-THIS-HERE"
   do
   if [ "$module" != "LEAVE-THIS-HERE" ] ; then
      module="$dir/$module.ko"
      if [ -e "/$module" ] ; then
         if [ ! -d `dirname "$ROOTFS/$module"` ] ; then
            mkdir -p `dirname "$ROOTFS/$module"`
         fi
         cp -Pp "/$module" "$ROOTFS/$module"
      else
         echo "Module file not found \"/$module\""
         umount "$ROOTFS"
         exit 1
      fi
   fi
done

# Update module dependencies
if [ ! -d "$ROOTFS/$dir" ] ; then
   mkdir -p "$ROOTFS/$dir"
fi
depmod -b "$ROOTFS" -v "$CDLINUXVER" -F "$SYSTEMMAP"

# Copy terminal information files
dir="usr/share/terminfo"
for file in \
   "linux" "linux-m" "linux-nic" \
   "vt100" "vt100-am"
   do
   file="$dir/${file:0:1}/$file"
   if [ -e "/$file" ] ; then
      if [ ! -d `dirname "$ROOTFS/$file"` ] ; then
         mkdir -p `dirname "$ROOTFS/$file"`
      fi
      cp -Pp "/$file" "$ROOTFS/$file"
   else
      echo "Terminal info file not found \"/$file\""
      umount "$ROOTFS"
      exit 1
   fi
done

# Copy other files
for file in \
   "usr/tmp" "etc/services" "usr/lib/libncurses.so" \
   "usr/lib/libncurses.so.5" "usr/lib/libncursesw.so" \
   "usr/lib/libncursesw.so.5"
   do
   if [ -e "/$file" ] ; then
      if [ ! -d `dirname "$ROOTFS/$file"` ] ; then
         mkdir -p `dirname "$ROOTFS/$file"`
      fi
      cp -Pp "/$file" "$ROOTFS/$file"
   else
      echo "File not found \"/$file\""
      umount "$ROOTFS"
      exit 1
   fi
done

# Create empty files
for file in \
   "var/log/wtmp" "var/run/utmp"
   do
   if [ ! -d `dirname "$ROOTFS/$file"` ] ; then
      mkdir -p `dirname "$ROOTFS/$file"`
   fi
   touch "$ROOTFS/$file"
done

# Unmount the root filesystem
umount "$ROOTFS"

# Create the compressed filesystem
dd if="$FSBIN" bs=1k | gzip -v9 > "$FSCOMP"

# Create CD directories
mkdir "$CDROOT" 
mkdir "$CDROOT/$CDBOOT"
mkdir "$CDROOT/$CDGRUB"

# Copy grub boot loader
cp $GRUBLDR/stage2_eltorito "$CDROOT/$CDGRUB"
cp "$CONFIG/grub/menu.lst" "$CDROOT/$CDGRUB"

# Copy boot image
cp "$BOOTIMAGE" "$CDROOT/$CDBOOT/vmlinuz"

# Copy the compressed root filesystem
cp "$FSCOMP" "$CDROOT/$CDBOOT/rootfs.gz"

# Create the ISO image for the CD
mkisofs -o "$OUTFILE" -r \
   -b "$CDGRUB/stage2_eltorito" -no-emul-boot \
   -boot-load-size 4 -boot-info-table "$CDROOT"
I didn't post all the other configuration files required to make the CD but I can post those on my web site if you want them.

I'll be glad to provide a copy of the boot CD image and files that I have. I'm not sure if I compiled the Marvell driver into the kernel but I can check that.
 
Old 09-16-2009, 02:17 PM   #13
Ja5
LQ Newbie
 
Registered: Aug 2009
Location: Texas
Distribution: Slackware
Posts: 5

Original Poster
Rep: Reputation: 0
Sorry I haven't responded; work has been killing me. But WOW, you are WAY beyond me! Very impressive! There is no way I would have made even a little headway on this. In any event, I am downloading Slackware 13 as I type this and would greatly appreciate a copy of the boot CD image with the Marvell driver in the kernel, to see if I can get a tri-boot going. I haven't used Slack in a while and have been missing it. Thanks in advance!
 
Old 09-16-2009, 09:03 PM   #14
Erik_FL
Member
 
Registered: Sep 2005
Location: Boynton Beach, FL
Distribution: Slackware
Posts: 821

Rep: Reputation: 258
Quote:
Originally Posted by Ja5 View Post
Sorry I haven't responded; work has been killing me. But WOW, you are WAY beyond me! Very impressive! There is no way I would have made even a little headway on this. In any event, I am downloading Slackware 13 as I type this and would greatly appreciate a copy of the boot CD image with the Marvell driver in the kernel, to see if I can get a tri-boot going. I haven't used Slack in a while and have been missing it. Thanks in advance!
RAID Boot CD and Scripts

The boot CD will let you log in as "root" with no password after booting.

Use the following command to detect RAID arrays.

dmraid -ay

Then look at detected device names.

ls -l /dev/mapper

Mount the devices or do whatever else you want. Usually that will be copying an existing Slackware system or installing "grub". To use Slackware Setup you need to follow my posted instructions or install to a "normal" hard disk first. It's a lot easier if you install to a normal hard disk and get RAID working first. Then use the boot CD to copy the files with "cp -a".

mkdir /mnt/nonraid
mkdir /mnt/raid
mount /dev/hda1 /mnt/nonraid
mount /dev/mapper/blah-blah /mnt/raid
cp -a /mnt/nonraid/* /mnt/raid
umount /mnt/raid
umount /mnt/nonraid


Replace "blah-blah" with the very long device name of the partition where you want to install Linux on the RAID array.
 
Old 09-16-2009, 09:37 PM   #15
Erik_FL
Member
 
Registered: Sep 2005
Location: Boynton Beach, FL
Distribution: Slackware
Posts: 821

Rep: Reputation: 258
The files you want to download are "sasraid.zip" and "sasbootcd.zip". If those aren't the ones you downloaded then download those files. The other ones I had there didn't support SAS.
 
  

