LinuxQuestions.org > Forums > Linux Forums > Linux - Distributions > Slackware > Slackware - Installation
Slackware - Installation: This forum is for the discussion of installation issues with Slackware.
Old 09-21-2007, 06:09 PM   #1
hb950322
LQ Newbie
 
Registered: Dec 2006
Posts: 24

Rep: Reputation: 15
RAID 1 with Asus P5N-E SLI and Slack 12.0


There are a lot of threads and posts on this theme, but first of all I need a basic and fundamental hint: the board has a built-in RAID controller which is enabled in the BIOS, and the two 400 GB SATA hard disks are set up as a RAID 1 by a RAID setup utility that comes up before the OS boots.

So I thought that these two disks look like one disk to the OS and should be set up as such. Installing to /dev/sda was no problem, but installing LILO to the MBR did not work. After the reboot I get a "no system disk or disk error" message, and the OS can only be booted from the installation DVD.

So, everything (as always...) is a little more tricky the first time you do it. Any hints or pointers to documentation that fit my situation are welcome.

Greets
Henric
 
Old 12-17-2007, 12:13 AM   #2
BiafraRepublic
LQ Newbie
 
Registered: Oct 2004
Distribution: Slackware 12.0
Posts: 6

Rep: Reputation: 0
You might want to add a PATA HDD, install LILO to its MBR, and set the BIOS to boot from that drive first.
 
Old 01-15-2008, 06:59 AM   #3
peterstoops
LQ Newbie
 
Registered: Nov 2003
Distribution: Fedora, RedHat, SuSE, SlackWare
Posts: 1

Rep: Reputation: 0
Faced a similar issue when installing on a Compaq DL360 G3

After installation, I manually installed GRUB (found in extra/) as it gives you some more flexibility.
I also had to edit /boot/grub/device.map to change the disk order.
after that,
# grub-install /dev/cciss/c0d0
(in my case! yours may be /dev/sda)
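For reference, a sketch of what the edited device.map might look like — the device names here are examples from this particular setup, not canonical values; yours will differ:

```
# /boot/grub/device.map -- example only; list the devices in the
# order the BIOS presents them, so GRUB's (hd0) matches the boot disk
(hd0)   /dev/cciss/c0d0
(hd1)   /dev/sda
```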

Also, make sure to check that your boot partition is set "bootable".

It definitely took some time to figure it all out.

Rgds,
Peter
 
Old 01-20-2008, 10:40 AM   #4
tellef
LQ Newbie
 
Registered: Aug 2005
Location: Norway
Distribution: Slackware & Debian.
Posts: 23

Rep: Reputation: 15
Raid 1

Most so-called RAID controllers mounted on consumer motherboards are not RAID controllers at all. I don't know your model, though. But during the install you can tell something is off if the drives are listed individually and no RAID sets show up. You should turn off the RAID mock-up in the BIOS and use the drives as they are. LILO can quite well be set up to boot a RAID 1 even if one of the drives gets destroyed; it will simply jump to the next drive in its list.
Your post is a bit old, though; maybe you have solved this already.
If not, check out this article on the issue.
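For the software-RAID case described above, a minimal lilo.conf sketch might look like this — assuming /dev/md0 is the RAID 1 root array (an assumption, adjust the device names and kernel path to your setup). The raid-extra-boot option tells LILO to also write boot records to the individual member disks, so the box can still boot if one drive dies:

```
# /etc/lilo.conf -- sketch only, not a complete configuration
boot = /dev/md0              # install the boot loader on the RAID 1 array
raid-extra-boot = mbr-only   # also write boot code to each member's MBR
image = /boot/vmlinuz
  root = /dev/md0
  label = Linux
  read-only
```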
 
Old 01-23-2008, 10:26 PM   #5
Erik_FL
Member
 
Registered: Sep 2005
Location: Boynton Beach, FL
Distribution: Slackware
Posts: 821

Rep: Reputation: 256
RAID with Slack 12.0

I got RAID working on my Promise FastTrack controller but it was difficult. It's easier if you can compile the kernel on another computer and create a boot CD to install Linux. I'll describe the issues and how I got around them.

I had to run "dmraid" for my RAID sets to be recognized, since Promise has no RAID driver for kernel version 2.6 and the Linux driver doesn't support RAID. That means "dmraid" had to be compiled ahead of time against the kernel and libraries for Slackware 12.0. I also had to compile a library used by "dmraid". Those files can be copied to the ram drive from a floppy or CD after booting the Slackware installation CD.

After booting the installation CD, it is necessary to run "dmraid". Then Slackware can be installed to the correct devices under "/dev/mapper".

The "dmraid" program creates very long device names under /dev/mapper and that doesn't work well with the initialization scripts in Linux. I had to note the major and minor unit numbers for the mapper devices corresponding to the devices after running "dmraid".

Some of the Linux initialization occurs before UDEV starts and some occurs afterward. In order to get that to work I had to create additional devices under /dev while the installation CD was booted.

I created these devices.

/dev/sdr3 - Linux root partition
/dev/sdr6 - Linux swap partition

The device names are fake; I just picked them so that the initialization scripts could use simple device names. I had to set them to the major and minor unit numbers of the actual /dev/mapper devices.

Because I created the fake device names I could use "root=/dev/sdr3" in the GRUB menu.lst file and the "/etc/fstab" file. I also used "/dev/sdr6" for the swap space in the "/etc/fstab" file. Those all get used before UDEV starts.
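As an illustration, the relevant /etc/fstab entries might look like the following — a sketch assuming the fake names /dev/sdr3 (root, here taken to be ext3) and /dev/sdr6 (swap) from above; the filesystem type and options are assumptions:

```
# /etc/fstab -- sketch; adjust filesystem type and options
/dev/sdr6   swap   swap   defaults   0   0
/dev/sdr3   /      ext3   defaults   1   1
```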

Since UDEV doesn't create usable device names (by default) I had to add some UDEV rules to create the names. I created a file called "/etc/udev/rules.d/10-local.rules" with the following.

KERNEL=="dm-2", NAME="sdr"
KERNEL=="dm-3", NAME="sdr1"
KERNEL=="dm-4", NAME="sdr3"
KERNEL=="dm-5", NAME="sdr5"
KERNEL=="dm-6", NAME="sdr6"

I had to use an "initrd" RAM disk image to run "dmraid" during Linux boot. In order to do that, I had to create a custom "init" script for the "initrd".

I copied the standard "init" script from the "/boot/initrd-tree" directory after using "mkinitrd". Then I added statements to load "dmraid".

...
# Find any dmraid detectable partitions
dmraid -ay

# Switch to real root partition:
echo 0x0100 > /proc/sys/kernel/real-root-dev
...

I copied the new "init" script back to "/boot/initrd-tree" and then ran "mkinitrd" again to create the ram disk with the modified script.

Everything that I've mentioned so far has to be done while the installation CD is booted. I used a few commands so that I could do the mkinitrd and do other things before booting Linux the first time.

chroot /mnt/tmp
mount -t proc proc /proc
mount -t sysfs sysfs /sys

After I finished all the necessary steps, including installing GRUB, I unmounted everything.

umount /sys
umount /proc
exit

LILO doesn't seem to work with mapper devices so I had to use GRUB instead of LILO. That meant compiling grub and then installing it.

It's easier to install GRUB if you make a GRUB boot floppy or CD so that you can use "native" mode. In my case, GRUB was in the third partition so I did this after booting a GRUB floppy.

root (hd0,2)
setup (hd0,2)

That works because "native" mode calls the BIOS and the RAID BIOS takes care of accessing the RAID array correctly. If you do that from Linux you have to make sure that the GRUB device names are correctly mapped to the "/dev/mapper" device names using the "/boot/grub/device.map" file.

(hd0,2) /dev/mapper/cryptic-device-name

The "dmraid" program creates some very strange device names that have to be used where I showed "cryptic-device-name". You have to use those names while installing Slackware, or create your own names temporarily under /dev using the correct major and minor unit numbers from the cryptic names.

All of this only works if you have a Linux SATA driver compatible with your RAID controller (in non-RAID mode). Luckily there was an "sata_promise" driver, and the latest version supports both the SATA and PATA ports on the controller.

One advantage to using this method over a proprietary RAID driver is that it works with the disks connected on any SATA controller compatible with Linux. That means the drives can be moved to a different SATA controller and read in the event of a motherboard failure. In my case there was no proprietary RAID driver available so I had no choice.

Last edited by Erik_FL; 01-23-2008 at 10:35 PM.
 
Old 01-25-2008, 12:40 AM   #6
wdk23411
LQ Newbie
 
Registered: Jan 2008
Posts: 9

Rep: Reputation: 0
According to Erik_FL's reply, I should compile and run the dmraid tool first if I want to install Slackware 12 on an ICH9R-based RAID 0 system.
I tried that yesterday and failed. How can I compile and run dmraid after booting the Slackware installation DVD? I have no idea. Could someone show me instructions, step by step?

ps:Sorry about my English. :-)
 
Old 01-25-2008, 10:21 AM   #7
Erik_FL
Member
 
Registered: Sep 2005
Location: Boynton Beach, FL
Distribution: Slackware
Posts: 821

Rep: Reputation: 256
Quote:
Originally Posted by wdk23411 View Post
According to Erik_FL's reply, I should compile and run the dmraid tool first if I want to install Slackware 12 on an ICH9R-based RAID 0 system.
I tried that yesterday and failed. How can I compile and run dmraid after booting the Slackware installation DVD? I have no idea. Could someone show me instructions, step by step?

ps:Sorry about my English. :-)
You can't compile the "dmraid" program using the Slackware installation DVD. You will have to install Slackware on some other computer or a non-RAID hard disk on the same computer.

Another option is to use Microsoft Virtual PC or VirtualBox under Windows to install Slackware in a virtual machine so that you can compile the files there.

You can download the copy of the files that I compiled here.
dmraid.tar

Extract the files to a folder.

tar -xvf dmraid.tar

Copy all the files except for "dmraid" to the "/lib" directory.
Copy the "dmraid" file to the "/sbin" directory.

Now you should be able to detect the devices.

dmraid -ay

Remember, if you are doing this from the Slackware boot disc during installation, you will be copying into a RAM disk. Once you can mount the RAID array, copy the files onto the RAID array where you installed Slackware.

If you still have problems installing, download this boot CD image that I made. It has "dmraid" and the mapper device.
bootcd.bin
 
Old 01-25-2008, 01:14 PM   #8
wdk23411
LQ Newbie
 
Registered: Jan 2008
Posts: 9

Rep: Reputation: 0
Thanks a lot. I will just try the tar ball during the installation.
 
Old 01-26-2008, 05:40 PM   #9
tellef
LQ Newbie
 
Registered: Aug 2005
Location: Norway
Distribution: Slackware & Debian.
Posts: 23

Rep: Reputation: 15
The Slackware 12 installation DVD gives you the mdadm tools. With these you can create, assemble, and activate RAID disk arrays. Once that is done, you do not even have to reboot before proceeding with Slackware's installer, which will gladly accept installing to /dev/md0, /dev/md1, etc.
I have read many posts stating that you must install the regular way first and then sort of hand the finished installation over to a RAIDed disk system. I have never had to do any of that since version 11.
1. Use fdisk/cfdisk from the installation DVD to create whatever RAID partitions you like (type: Linux raid autodetect).
2. Use fdisk to do it again on the second disk.
3. Use the mdadm toolset to set up and activate the RAID devices.
4. Install good old Slackware.

But of course, I am talking about Linux software RAID.
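The four steps above might look roughly like this on the command line — a sketch assuming two disks /dev/sda and /dev/sdb with matching first partitions (adjust devices, partitions, and RAID level to your own layout):

```
fdisk /dev/sda    # step 1: create partitions, set type "fd" (Linux raid autodetect)
fdisk /dev/sdb    # step 2: mirror the same layout on the second disk

# step 3: build a RAID 1 array from the matching partitions
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
cat /proc/mdstat  # confirm the array is up and syncing

# step 4: run the Slackware installer and point it at /dev/md0
setup
```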
 
Old 01-26-2008, 07:10 PM   #10
Erik_FL
Member
 
Registered: Sep 2005
Location: Boynton Beach, FL
Distribution: Slackware
Posts: 821

Rep: Reputation: 256
Quote:
Originally Posted by tellef View Post
The Slackware 12 installation DVD gives you the mdadm tools. With these you can create, assemble, and activate RAID disk arrays. Once that is done, you do not even have to reboot before proceeding with Slackware's installer, which will gladly accept installing to /dev/md0, /dev/md1, etc.
I have read many posts stating that you must install the regular way first and then sort of hand the finished installation over to a RAIDed disk system. I have never had to do any of that since version 11.
1. Use fdisk/cfdisk from the installation DVD to create whatever RAID partitions you like (type: Linux raid autodetect).
2. Use fdisk to do it again on the second disk.
3. Use the mdadm toolset to set up and activate the RAID devices.
4. Install good old Slackware.

But of course, I am talking about Linux software RAID.
This thread is in regard to "fake hardware RAID" implemented by the BIOS software and proprietary drivers (Intel Array Management software). You are correct that Linux RAID will do most of the same functions. There are only a few reasons to use "fake hardware RAID".
  • to boot from RAID
  • to allow multiple operating systems to access the same RAID arrays
  • your OS (XP Home, Vista Home, DOS) doesn't support software RAID

From a performance standpoint, the two approaches (Linux software and fake hardware) are about the same. Fake hardware RAID controllers are just ordinary SATA or IDE controllers with BIOS extensions (firmware) to allow booting and formatting the RAID arrays on the disks. They use an operating system driver to make each RAID array on multiple disks appear to be one hard disk device. The driver still has to do as many I/O transfers as software RAID because the disk hardware looks like individual disks.

In some cases it is possible to boot from a Linux mirror set, but it usually isn't possible with a stripe set. Even with a mirror set, booting occurs from only one of the drives unless the BIOS can be configured to boot from the other disk in the array. Fake hardware RAID provides for booting all kinds of arrays with redundancy for mirrors.

Fake hardware RAID controllers use proprietary RAID formats, but there are often drivers to support multiple operating systems. Using fake hardware RAID allows more than one OS to access partitions in the RAID array. The Linux "dmraid" utility allows non-Linux RAID arrays to be recognized and the appropriate devices configured using standard disk drivers and the device-mapper. Other operating systems have access because they have drivers that understand the proprietary RAID arrays. That's true even if the other operating systems don't support software RAID, since the driver contains the RAID functions.

Fake hardware RAID has its problems. If there is no proprietary RAID driver for an OS and no facility like "dmraid" to recognize metadata for arrays then an operating system can't access the RAID arrays. If the hardware fails, it may be impossible to access the data without purchasing a compatible RAID controller. The "dmraid" program opens the door for other hard disk controllers to be used for data recovery. Proprietary drivers may have more bugs and be less reliable.
 
Old 01-26-2008, 07:27 PM   #11
tellef
LQ Newbie
 
Registered: Aug 2005
Location: Norway
Distribution: Slackware & Debian.
Posts: 23

Rep: Reputation: 15
raided.

I certainly agree with you on the benefit of having the array(s) accessible from more than one OS. But as far as booting goes, I would say that most hardware can be configured to boot from the second disk if the first one has failed, and Linux boot managers can be configured for this as well.
Booting from RAID 0 is not a good idea anyway, is it? Why would you want to?
I can see why you would have things installed on RAID 0, but not boot off it.

I read an article some time ago that I unfortunately have no link to here and now. It did a rather thorough walk-through of three approaches: the fake-RAID solution, software RAID, and a real-deal controller that is likely to cost more by itself than the rest of your box. No need to wonder which was best; the expensive hardware won in every aspect, of course. But I don't recall reading about the ability to access the same data from several systems. Perhaps the fake-RAID chipset deal isn't so bad after all...

Anyway, my web server has been running for five years now, and I have had three disk crashes. None of them gave me any worries beyond issuing mdadm commands and tossing a new drive into the drawer.
I guess either way could work.
 
Old 01-30-2008, 04:51 AM   #12
wdk23411
LQ Newbie
 
Registered: Jan 2008
Posts: 9

Rep: Reputation: 0
I have installed Slackware 12 on the RAID 0 array driven by dmraid. But how can I boot from it using GRUB instead of LILO?
 
Old 01-30-2008, 10:55 AM   #13
Erik_FL
Member
 
Registered: Sep 2005
Location: Boynton Beach, FL
Distribution: Slackware
Posts: 821

Rep: Reputation: 256
Quote:
Originally Posted by wdk23411 View Post
I have installed Slackware 12 on the RAID 0 array driven by dmraid. But how can I boot from it using GRUB instead of LILO?
In order to boot from a RAID 0 array, it has to be created and used on a hardware RAID controller (or fake hardware RAID controller). Those have a BIOS ROM that loads during the computer's BIOS startup. The BIOS ROM allows booting from a RAID array. GRUB uses the BIOS to read the Linux kernel into memory.

Download Grub Legacy

Extract the files from the tarball to a directory under /usr/src and then follow the instructions to make grub.

Copy the required boot files to /boot/grub and then create a "menu.lst" file.

Code:
default 0
timeout 5

title Linux
root (hd0,2)
kernel /boot/vmlinuz vga=773 root=/dev/sdr1 load_ramdisk=1 ramdisk_size=4096
initrd /boot/initrd.gz

title Windows XP
rootnoverify (hd0,0)
chainloader +1
NOTE: Change "/dev/sdr1" to the correct root device.

Use the command "info grub" to find out how to make a GRUB boot floppy, or a GRUB boot CD. Boot grub from floppy or CD and press C to enter the command mode of GRUB.

Use these commands to install GRUB to your master boot record or partition boot sector.

To install to MBR:

root (hd0,0)
setup (hd0)

To install to partition boot sector:

root (hd0,0)
setup (hd0,0)

If your Linux partition containing GRUB is in some other location then change "(hd0,0)".

First Hard Disk, First Primary partition - (hd0,0)
First Hard Disk, Second Primary partition - (hd0,1)
First Hard Disk, Third Primary partition - (hd0,2)
First Hard Disk, Fourth Primary partition - (hd0,3)
First Hard Disk, First Logical partition - (hd0,4)
First Hard Disk, Second Logical partition - (hd0,5)

I have repeated some of the information from my earlier post, with more detailed instructions below.

In order for Linux to boot from a "dmraid" device, it is necessary to use an "initrd" RAM disk to run "dmraid". First, use "mkinitrd" to create "/boot/initrd.gz". Next, edit the file "/boot/initrd-tree/init" and add the line to run "dmraid" as shown below in bold. After editing the file, use "mkinitrd" again with no options or parameters to create "initrd.gz" again using the modified "init".

Code:
INITRD=`cat /initrd-name`
ROOTDEV=`cat /rootdev`
ROOTFS=`cat /rootfs`
LUKSDEV=`cat /luksdev`

# Mount /proc and /sys:
mount -n proc /proc -t proc
mount -n sysfs /sys -t sysfs

# Load kernel modules:
if [ ! -d /lib/modules/`uname -r` ]; then
  echo "No kernel modules found for Linux `uname -r`."
elif [ -x ./load_kernel_modules ]; then # use load_kernel_modules script:
  echo "${INITRD}:  Loading kernel modules from initrd image:"
  . ./load_kernel_modules
else # load modules (if any) in order:
  if ls /lib/modules/`uname -r`/*.*o 1> /dev/null 2> /dev/null ; then
    echo "${INITRD}:  Loading kernel modules from initrd image:"
    for module in /lib/modules/`uname -r`/*.*o ; do
      insmod $module
    done
    unset module
  fi
fi

# Initialize LVM:
if [ -x /sbin/vgscan ]; then
  /sbin/vgscan --mknodes --ignorelockingfailure
  sleep 10
  /sbin/vgchange -ay --ignorelockingfailure
fi

# Make encrypted partitions available:
# The useable device will be under /dev/mapper/
if [ -x /sbin/cryptsetup ]; then
  if /sbin/cryptsetup isLuks ${LUKSDEV} ; then
    /sbin/cryptsetup luksOpen ${LUKSDEV} $ROOTDEV </dev/systty >/dev/systty 2>&1
    ROOTDEV="/dev/mapper/${ROOTDEV}"
  fi
fi

# Find any dmraid detectable partitions
dmraid -ay

# Switch to real root partition:
echo 0x0100 > /proc/sys/kernel/real-root-dev
mount -o ro -t $ROOTFS $ROOTDEV /mnt
if [ ! -r /mnt/sbin/init ]; then
  echo "ERROR:  No /sbin/init found on rootdev (or not mounted).  Trouble ahead."
  exit 1
fi
unset ERR
umount /proc
umount /sys
echo "${INITRD}:  exiting"
exec switch_root /mnt /sbin/init $@
The "dmraid" program creates very long device names under "/dev/mapper". Also, those devices don't exist in "/dev" of the Linux root partition. I found that it was easier to create some fake device names such as "/dev/sdr1" using the correct major and minor unit numbers corresponding to the names under "/dev/mapper". Only two permanent names are required, one for the root partition and one for the swap partition. Those have to be created when UDEV isn't running because they are used before UDEV starts.
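Creating such a fake node might look like this — a sketch where "cryptic-name3" stands in for the actual dmraid device name, and the 253, 4 major/minor pair is just an example; use whatever numbers ls reports on your system:

```
# read the major/minor numbers off the mapper device
ls -l /dev/mapper/cryptic-name3   # e.g. "brw------- 1 root root 253, 4 ..."
# create a block device node with the same numbers
mknod /dev/sdr3 b 253 4
```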

Once UDEV starts you also need to have names for your disks. I created a file called "/etc/udev/rules.d/10-local.rules" to create some fake device names.

Code:
KERNEL=="dm-2", NAME="sdr"
KERNEL=="dm-3", NAME="sdr1"
KERNEL=="dm-4", NAME="sdr3"
KERNEL=="dm-5", NAME="sdr5"
KERNEL=="dm-6", NAME="sdr6"
Your file will probably be slightly different.

By using the fake device names, the "/etc/fstab" file can refer to the partitions. It's very important that the permanent names created for the root and swap partitions are the same as the names created later by UDEV. The "/etc/fstab" file is used before UDEV starts and also after UDEV starts.
 
Old 01-30-2008, 09:53 PM   #14
wdk23411
LQ Newbie
 
Registered: Jan 2008
Posts: 9

Rep: Reputation: 0
Should I copy the "dmraid" file to some directory on the RAID 0 array, such as /boot or /usr, before editing the file "/boot/initrd-tree/init"?
 
Old 01-31-2008, 04:59 AM   #15
wdk23411
LQ Newbie
 
Registered: Jan 2008
Posts: 9

Rep: Reputation: 0
There is another problem. How can I compile GRUB when I have only just installed the Slackware 12 system? My problem is that I couldn't make the system start up after I finished the installation.
 
  

