Slackware - Installation
This forum is for the discussion of installation issues with Slackware.
There are a lot of threads and posts on this theme, but first of all I need a basic and fundamental hint: the board has a built-in RAID controller which is enabled in the BIOS, and the two SATA 400 GB hard disks are set up as a RAID 1 by a RAID setup utility which comes up before the OS boots.
So I thought that these two discs look like one to the OS and should be set up as such. Installation on /dev/sda was no problem, but the installation of LILO in the MBR did not work. After the reboot I get a "no system disc or disc error" failure, and the OS can only be booted from the installation DVD.
So, everything (as always...) is a little bit more tricky if you do it the first time. Any hints on documentation that fits my situation are welcome.
I faced a similar issue when installing on a Compaq DL360 G3.
After installation, I manually installed grub (in extras) as it provides you with some more flexibility.
I had to edit /boot/grub/devices.map to have it change the disk order too.
after that,
# grub-install /dev/cciss/c0d0
(in my case! yours may be /dev/sda)
Also, make sure to check that your boot partition is set "bootable".
Most so-called RAID controllers mounted on consumer motherboards are not RAID controllers at all. I don't know your model, though. But during the install you can see if something is weird when the drives are listed individually but no RAID sets appear. You should turn off the RAID mock-up in the BIOS and use the drives as they are. LILO is quite possible to set up for RAID 1 booting even if one of the drives gets destroyed; it will simply jump to the next drive in its list.
Your post is a bit old, though; maybe you have solved this already.
If not, check out this article on the issue.
I got RAID working on my Promise FastTrack controller but it was difficult. It's easier if you can compile the kernel on another computer and create a boot CD to install Linux. I'll describe the issues and how I got around them.
I had to run "dmraid" for my RAID sets to be recognized, since Promise has no RAID driver for kernel version 2.6 and the Linux driver doesn't support RAID. That means "dmraid" had to be compiled ahead of time against the kernel and libraries for Slackware 12.0. I also had to compile a library used by "dmraid". Those files can be copied to the ram drive from a floppy or CD after booting the Slackware installation CD.
After booting the installation CD, it is necessary to run "dmraid". Then Slackware can be installed to the correct devices under "/dev/mapper".
The "dmraid" program creates very long device names under /dev/mapper and that doesn't work well with the initialization scripts in Linux. I had to note the major and minor unit numbers for the mapper devices corresponding to the devices after running "dmraid".
Some of the Linux initialization occurs before UDEV starts and some occurs afterward. In order to get that to work I had to create additional devices under /dev while the installation CD was booted.
I created these devices.
/dev/sdr3 - Linux root partition
/dev/sdr6 - Linux swap partition
The device names are fake, I just picked those to use so that the initialization scripts could use simple device names. I had to set those to the major and minor unit numbers for the actual /dev/mapper devices.
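For example, the fake nodes can be created with "mknod" (the major/minor numbers 253:1 and 253:2 below are hypothetical; read the real ones from "ls -l /dev/mapper" after running dmraid):

```shell
# Alias nodes for the real /dev/mapper devices -- the numbers must
# match the major/minor pairs your mapper devices actually have:
mknod /dev/sdr3 b 253 1    # root partition
mknod /dev/sdr6 b 253 2    # swap partition
```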
Because I created the fake device names I could use "root=/dev/sdr3" in the GRUB menu.lst file and the "/etc/fstab" file. I also used "/dev/sdr6" for the swap space in the "/etc/fstab" file. Those all get used before UDEV starts.
Since UDEV doesn't create usable device names (by default) I had to add some UDEV rules to create the names. I created a file called "/etc/udev/rules.d/10-local.rules" with the following.
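The contents of that rules file aren't shown here, so the sketch below is only a guess at what such rules might look like with udev of that era; the "pdc_..." names are hypothetical placeholders for whatever names "dmraid" actually created, and the "dmsetup info" lookup is one possible way to match a dm node to its name:

```shell
# Written to the current directory for illustration; the real file
# belongs at /etc/udev/rules.d/10-local.rules.
cat > 10-local.rules <<'EOF'
# Name the dmraid mapper nodes sdr3/sdr6 (dm names are placeholders):
KERNEL=="dm-[0-9]*", PROGRAM="/sbin/dmsetup info -c --noheadings -o name -j %M -m %m", RESULT=="pdc_xxxxxxxx3", NAME="sdr3"
KERNEL=="dm-[0-9]*", PROGRAM="/sbin/dmsetup info -c --noheadings -o name -j %M -m %m", RESULT=="pdc_xxxxxxxx6", NAME="sdr6"
EOF
```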
I had to use an "initrd" RAM disk image to run "dmraid" during Linux boot. In order to do that, I had to create a custom "init" script for the "initrd".
I copied the standard "init" script from the "/boot/initrd-tree" directory after using "mkinitrd". Then I added statements to load "dmraid".
...
# Find any dmraid detectable partitions
dmraid -ay
# Switch to real root partition:
echo 0x0100 > /proc/sys/kernel/real-root-dev
...
I copied the new "init" script back to "/boot/initrd-tree" and then ran "mkinitrd" again to create the ram disk with the modified script.
Everything that I've mentioned so far has to be done while the installation CD is booted. I used a few commands so that I could do the mkinitrd and do other things before booting Linux the first time.
chroot /mnt/tmp
mount -t proc proc /proc
mount -t sysfs sysfs /sys
After I finished doing all the necessary things including installing GRUB then I unmounted everything.
umount /sys
umount /proc
exit
LILO doesn't seem to work with mapper devices so I had to use GRUB instead of LILO. That meant compiling grub and then installing it.
It's easier to install GRUB if you make a GRUB boot floppy or CD so that you can use "native" mode. In my case, GRUB was in the third partition so I did this after booting a GRUB floppy.
root (hd0,2)
setup (hd0,2)
That works because "native" mode calls the BIOS and the RAID BIOS takes care of accessing the RAID array correctly. If you do that from Linux you have to make sure that the GRUB device names are correctly mapped to the "/dev/mapper" device names using the "/boot/grub/device.map" file.
(hd0,2) /dev/mapper/cryptic-device-name
The "dmraid" program creates some very strange device names that have to be used where I showed "cryptic-device-name". You have to use those names while installing Slackware, or create your own names temporarily under /dev using the correct major and minor unit numbers from the cryptic names.
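One way to look those numbers up (the name and numbers in the comments are only examples):

```shell
# Each mapper device reports its name and (major, minor) pair:
dmsetup ls              # e.g.  pdc_bggfdgec3   (253, 1)
ls -l /dev/mapper       # the same "253, 1" appears in the ls listing
```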
All of this only works if you have a Linux SATA driver compatible with your RAID controller (in non-RAID mode). Luckily there was a "sata_promise" driver, and the latest version supports the SATA and PATA ports on the controller.
One advantage to using this method over a proprietary RAID driver is that it works with the disks connected on any SATA controller compatible with Linux. That means the drives can be moved to a different SATA controller and read in the event of a motherboard failure. In my case there was no proprietary RAID driver available so I had no choice.
According to Erik_FL's reply, I should compile and run the dmraid tool first if I want to install Slackware 12 on an ICH9R-based RAID 0 system.
I tried that yesterday and failed. How can I compile and run dmraid after booting the Slackware installation DVD? I have no idea how. Could someone show me instructions step by step?
ps:Sorry about my English. :-)
You can't compile the "dmraid" program using the Slackware installation DVD. You will have to install Slackware on some other computer or a non-RAID hard disk on the same computer.
Another option is to use Microsoft Virtual PC or VirtualBox to install Slackware in a virtual machine under Windows so that you can compile the files.
You can download the copy of the files that I compiled here. dmraid.tar
Extract the files to a folder.
tar -xvf dmraid.tar
Copy all the files except for "dmraid" to the "/lib" directory.
Copy the "dmraid" file to the "/sbin" directory.
Now you should be able to detect the devices.
dmraid -ay
Remember, if you are doing this with the Slackware boot disc during installation, you will be copying into a ram disk. After you can mount the RAID array, then copy the files to the RAID array where you installed Slackware.
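Putting those steps together while booted from the install disc might look like the sketch below; the mapper name "cryptic-name3" and the library filenames are assumptions, standing in for whatever "dmraid -ay" actually creates and whatever shared libraries the archive contains:

```shell
# Copy the pre-built binary and libraries into the RAM disk:
tar -xvf dmraid.tar
cp dmraid /sbin/
cp lib* /lib/                # the shared libraries from the archive

# Activate the arrays, then run setup against /dev/mapper devices:
/sbin/dmraid -ay

# After the install, copy the same files onto the new root
# ("cryptic-name3" stands for your actual root partition's mapper name):
mount /dev/mapper/cryptic-name3 /mnt
cp /sbin/dmraid /mnt/sbin/
cp /lib/libdevmapper* /mnt/lib/    # library names are an assumption
```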
If you still have problems installing, download this boot CD image that I made. It has "dmraid" and the mapper device. bootcd.bin
The Slackware 12 installation DVD gives you the mdadm tools. With these you can create, assemble, and turn on RAID disk arrays. Once that is done you do not even have to reboot before proceeding with Slackware's installer program, which will gladly accept installing to /dev/md0, /dev/md1, etc.
I have read many posts stating that you must install in a regular way first and then sort of turn the finished installation over to a RAIDed disk system. I have never had to do any of that since version 11.
1. Use fdisk/cfdisk from the install DVD to create whatever RAID partitions you like (type: Linux raid autodetect).
2. Use fdisk to do it again on the second disk.
3. Use the mdadm toolset to set up and turn on the RAID devices.
4. Install good old slacketislack.
But of course, I am talking about Linux software RAID...
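Those four steps might look like this for a two-disk mirror (device names are examples; run from the booted install DVD):

```shell
# Steps 1-2: partition /dev/sda and /dev/sdb identically with fdisk,
# giving each RAID member partition the type "fd" (Linux raid autodetect).

# Step 3: assemble a RAID 1 from the matching partitions:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
cat /proc/mdstat    # watch the initial sync

# Step 4: run setup and point the installer at /dev/md0.
```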
This thread is in regard to "fake hardware RAID" implemented by the BIOS software and proprietary drivers (Intel Array Management software). You are correct that Linux RAID will do most of the same functions. There are only a few reasons to use "fake hardware RAID".
to boot from RAID
to allow multiple operating systems to access the same RAID arrays
your OS (XP Home, Vista Home, DOS) doesn't support software RAID
From a performance standpoint, the two approaches (Linux software and fake hardware) are about the same. Fake hardware RAID controllers are just ordinary SATA or IDE controllers with BIOS extensions (firmware) to allow booting and formatting the RAID arrays on the disks. They use an operating system driver to make each RAID array on multiple disks appear to be one hard disk device. The driver still has to do as many I/O transfers as software RAID because the disk hardware looks like individual disks.
In some cases it is possible to boot from a Linux mirror set, but it usually isn't possible with a stripe set. Even with a mirror set, booting occurs from only one of the drives unless the BIOS can be configured to boot from the other disk in the array. Fake hardware RAID provides for booting all kinds of arrays with redundancy for mirrors.
Fake hardware RAID controllers use proprietary RAID formats, but there are often drivers to support multiple operating systems. Using fake hardware RAID allows more than one OS to access partitions in the RAID array. The Linux "dmraid" utility allows non-Linux RAID arrays to be recognized and the appropriate devices configured using standard disk drivers and the device-mapper. Other operating systems have access because they have drivers that understand the proprietary RAID arrays. That's true even if the other operating systems don't support software RAID, since the driver contains the RAID functions.
Fake hardware RAID has its problems. If there is no proprietary RAID driver for an OS and no facility like "dmraid" to recognize metadata for arrays then an operating system can't access the RAID arrays. If the hardware fails, it may be impossible to access the data without purchasing a compatible RAID controller. The "dmraid" program opens the door for other hard disk controllers to be used for data recovery. Proprietary drivers may have more bugs and be less reliable.
I certainly agree with you on the benefit of having the array(s) accessible from more than one OS. But as far as booting goes, I would say that most hardware can be configured to boot from the second disk if the first one has failed, and Linux boot managers can be configured for this as well.
Booting from RAID 0 is not a good idea anyway, is it? Why would you want to?
I can see why you would have things installed on RAID 0, but not booting off it.
I read an article some time ago that I unfortunately have no link to here and now. It did a rather thorough walk-through of three approaches: the fake-RAID solution, software RAID, and a real-deal controller that is likely to cost more alone than the rest of your box. No need to wonder which was best; the expensive hardware won on every aspect, of course. But I don't recall reading about the ability to access the same data from several systems. Perhaps the fake-RAID chipset deal isn't so bad after all...
Anyway, my web server has been running for five years now, and I have had three disk crashes. None of them gave me any worries other than issuing mdadm commands and tossing a new drive into the drawer.
I guess either way could work
I have installed Slackware 12 on the RAID 0 driven by dmraid. But how can I boot from it using GRUB instead of LILO?
In order to boot from a RAID 0 array, it has to be created and used on a hardware RAID controller (or fake hardware RAID controller). Those have a BIOS ROM that loads during the computer's BIOS startup. The BIOS ROM allows booting from a RAID array. GRUB uses the BIOS to read the Linux kernel into memory.
Extract the files from the tarball to a directory under /usr/src and then follow the instructions to make grub.
Copy the required boot files to /boot/grub and then create a "menu.lst" file.
Code:
default 0
timeout 5
title Linux
root (hd0,2)
kernel /boot/vmlinuz vga=773 root=/dev/sdr1 load_ramdisk=1 ramdisk_size=4096
initrd /boot/initrd.gz
title Windows XP
rootnoverify (hd0,0)
chainloader +1
NOTE: Change "/dev/sdr1" to the correct root device.
Use the command "info grub" to find out how to make a GRUB boot floppy, or a GRUB boot CD. Boot grub from floppy or CD and press C to enter the command mode of GRUB.
Use these commands to install GRUB to your master boot record or partition boot sector.
To install to MBR:
root (hd0,0)
setup (hd0)
To install to partition boot sector:
root (hd0,0)
setup (hd0,0)
If your Linux partition containing GRUB is in some other location then change "(hd0,0)".
First Hard Disk, First Primary partition - (hd0,0)
First Hard Disk, Second Primary partition - (hd0,1)
First Hard Disk, Third Primary partition - (hd0,2)
First Hard Disk, Fourth Primary partition - (hd0,3)
First Hard Disk, First Logical partition - (hd0,4)
First Hard Disk, Second Logical partition - (hd0,5)
I repeated some of the information that I already posted with a bit more detailed instructions below.
In order for Linux to boot from a "dmraid" device, it is necessary to use an "initrd" RAM disk to run "dmraid". First, use "mkinitrd" to create "/boot/initrd.gz". Next, edit the file "/boot/initrd-tree/init" and add the line to run "dmraid" (the "dmraid -ay" line in the listing below). After editing the file, use "mkinitrd" again with no options or parameters to create "initrd.gz" again using the modified "init".
Code:
INITRD=`cat /initrd-name`
ROOTDEV=`cat /rootdev`
ROOTFS=`cat /rootfs`
LUKSDEV=`cat /luksdev`
# Mount /proc and /sys:
mount -n proc /proc -t proc
mount -n sysfs /sys -t sysfs
# Load kernel modules:
if [ ! -d /lib/modules/`uname -r` ]; then
echo "No kernel modules found for Linux `uname -r`."
elif [ -x ./load_kernel_modules ]; then # use load_kernel_modules script:
echo "${INITRD}: Loading kernel modules from initrd image:"
. ./load_kernel_modules
else # load modules (if any) in order:
if ls /lib/modules/`uname -r`/*.*o 1> /dev/null 2> /dev/null ; then
echo "${INITRD}: Loading kernel modules from initrd image:"
for module in /lib/modules/`uname -r`/*.*o ; do
insmod $module
done
unset module
fi
fi
# Initialize LVM:
if [ -x /sbin/vgscan ]; then
/sbin/vgscan --mknodes --ignorelockingfailure
sleep 10
/sbin/vgchange -ay --ignorelockingfailure
fi
# Make encrypted partitions available:
# The useable device will be under /dev/mapper/
if [ -x /sbin/cryptsetup ]; then
if /sbin/cryptsetup isLuks ${LUKSDEV} ; then
/sbin/cryptsetup luksOpen ${LUKSDEV} $ROOTDEV </dev/systty >/dev/systty 2>&1
ROOTDEV="/dev/mapper/${ROOTDEV}"
fi
fi
# Find any dmraid detectable partitions
dmraid -ay
# Switch to real root partition:
echo 0x0100 > /proc/sys/kernel/real-root-dev
mount -o ro -t $ROOTFS $ROOTDEV /mnt
if [ ! -r /mnt/sbin/init ]; then
echo "ERROR: No /sbin/init found on rootdev (or not mounted). Trouble ahead."
exit 1
fi
unset ERR
umount /proc
umount /sys
echo "${INITRD}: exiting"
exec switch_root /mnt /sbin/init $@
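The edit-and-rebuild cycle around that script can be sketched as follows; the kernel version matches Slackware 12.0's stock 2.6.21.5, but the module list, filesystem, and root device are examples to adjust:

```shell
# Build the initrd tree once so /boot/initrd-tree/init exists:
mkinitrd -c -k 2.6.21.5 -m sata_promise:ext3 -f ext3 -r /dev/sdr3

# Edit /boot/initrd-tree/init to add "dmraid -ay", and copy the
# dmraid binary into the tree so the script can run it:
cp /sbin/dmraid /boot/initrd-tree/sbin/

# Rebuild the image without clearing the tree (no -c this time):
mkinitrd
```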
The "dmraid" program creates very long device names under "/dev/mapper". Also, those devices don't exist in "/dev" of the Linux root partition. I found that it was easier to create some fake device names such as "/dev/sdr1" using the correct major and minor unit numbers corresponding to the names under "/dev/mapper". Only two permanent names are required, one for the root partition and one for the swap partition. Those have to be created when UDEV isn't running because they are used before UDEV starts.
Once UDEV starts you also need to have names for your disks. I created a file called "/etc/udev/rules.d/10-local.rules" to create some fake device names.
By using the fake device names, the "/etc/fstab" file can refer to the partitions. It's very important that the permanent names created for the root and swap partitions are the same as the names created later by UDEV. The "/etc/fstab" file is used before UDEV starts and also after UDEV starts.
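For reference, a sketch of the corresponding "/etc/fstab" entries using those fake names (the ext3 filesystem type is an assumption):

```shell
# Written locally for illustration; the real file is /etc/fstab.
cat > fstab.sample <<'EOF'
/dev/sdr6   swap   swap   defaults   0   0
/dev/sdr3   /      ext3   defaults   1   1
EOF
```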
There is another problem: how can I compile GRUB when I have only just installed the Slackware 12 system? My problem is that I couldn't make the system start up when I finished the installation.