Slackware: This Forum is for the discussion of Slackware Linux.
I assembled a new computer from Shuttle with two 250GB SATA Barracuda drives. Before doing anything I went into the BIOS and told it that the two drives were to be in RAID, as the manual specified, then created the RAID volume as specified. I believe it's working, because the RAID volume now shows up in the boot order instead of the individual drives.
Anyway, I installed Slackware 10.2 using the sata.i kernel. After the install I am unable to boot the OS; I don't even make it to LILO. It just fills the screen with "9A ". If I go back into the BIOS and disable the RAID so it boots the first SATA drive, I make it to LILO. I haven't tested booting Slackware that way, but I'm sure it would work. I don't want the contents of the two drives to diverge, since I'm trying to keep them mirrored. Was there something I was supposed to do for RAID to work with Linux? It seems pretty weird that it doesn't even reach the bootloader. Any help would be much appreciated.
I also tried the test26.s kernel, and exactly the same thing happens.
Last edited by mustangfanatic; 04-12-2006 at 06:55 PM.
The boot process takes place in two stages. The first stage loader is a single sector, and is loaded by the BIOS or by the loader in the MBR. It loads the multi-sector second stage loader, but is very space limited. When the first stage loader gets control, it types the letter "L"; when it is ready to transfer control to the second stage loader it types the letter "I". If any error occurs, like a disk read error, it will put out a hexadecimal error code, and then it will re-try the operation. All hex error codes are BIOS return values, except for the LILO-generated 40, 99 and 9A. A partial list of error codes follows:
00 no error
01 invalid disk command
02 address mark not found
03 disk write-protected
04 sector not found
06 floppy disk removed
08 DMA overrun
0A bad sector flag
0B bad track flag
20 controller failure
40 seek failure (BIOS)
40 cylinder>1023 (LILO)
99 invalid second stage index sector (LILO)
9A no second stage loader signature (LILO)
AA drive not ready
FF sense operation failed
Error code 40 is generated by the BIOS, or by LILO during the conversion of a linear (24-bit) disk address to a geometric (C:H:S) address. On older systems which do not support lba32 (32-bit) addressing, this error may also be generated. Errors 99 and 9A usually mean the map file (-m or map=) is not readable, likely because LILO was not re-run after some system change, or there is a geometry mismatch between what LILO used (lilo -v3 to display) and what is actually being used by the BIOS (one of the lilo diagnostic disks, available in the source distribution, may be needed to diagnose this problem).
When the second stage loader has received control from the first stage, it prints the letter "L", and when it has initialized itself, including verifying the "Descriptor Table" - the list of kernels/others to boot - it will print the letter "O", to form the full word "LILO", in uppercase. All second stage loader error messages are English text, and try to pinpoint, more or less successfully, the point of failure.
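The practical upshot of the 99/9A description above is: re-run lilo after anything on the system changes, and compare the geometry it records against what the BIOS uses. A minimal sketch of a non-destructive check, assuming lilo is installed and you have root (guarded so it is harmless on a machine without lilo):

```shell
# Dry-run check of the LILO configuration and map: -t tests without writing
# the boot sector, -v adds verbosity. Guarded so the sketch runs anywhere.
if command -v lilo >/dev/null 2>&1; then
    lilo -t -v || echo "lilo -t failed (needs root and a valid /etc/lilo.conf)"
else
    echo "lilo not installed here; on the affected box run: lilo -t -v, then lilo"
fi
```

Once the dry run looks right, a plain `lilo` (no -t) writes the fresh map.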
bruce@silas:~$ less ../Slackware-HOWTO
sata.i This is a version of bare.i with support for SATA
controllers made by Promise, Silicon Image, SiS,
ServerWorks / Apple K2, VIA, and Vitesse.
raid.s This is a kernel with support for some hardware SCSI
and ATA RAID controllers. The install disks now have
preliminary support for these controllers as well. The
drivers included are:
AMI MegaRAID 418, 428, 438, 466, 762, 490 and 467 SCSI
host adapters. (use scsi2.s for newer models)
Compaq Smart Array controllers.
Compaq Smart Array 5xxx controllers.
IBM ServeRAID hardware RAID controllers.
LSI Logic Fusion(TM) MPT devices (not really RAID, but
added since there was room for this driver here)
Mylex DAC960, AcceleRAID, and eXtremeRAID controllers.
Many of these controllers will require some degree of
do-it-yourself setup before and/or after installation.
scsi.s This is a SCSI kernel with support for various
controllers. Note that this disk does not include
Adaptec support any longer -- you must use the adaptec.s
kernel for that.
This disk supports these SCSI controllers:
AM53/79C974 PCI SCSI support
BusLogic SCSI support
EATA ISA/EISA/PCI (DPT and generic EATA/DMA-compliant boards)
Initio 91XXU(W) and Initio 91XXU(W) support
SYM53C8XX Version 2 SCSI support
Qlogic ISP SCSI support
Qlogic QLA 1280 SCSI support
scsi2.s This is a SCSI kernel with support for various
controllers.
This disk supports these SCSI controllers:
AdvanSys SCSI support (supports all AdvanSys SCSI
controllers, including some SCSI cards included with
HP CD-R/RW drives, the Iomega Jaz Jet SCSI controller,
and the SCSI controller on the Iomega Buz multimedia adapter)
ACARD 870U/W SCSI host adapter support
AMI MegaRAID (newer models)
Compaq Fibre Channel 64-bit/66Mhz HBA support
Domex DMX3191D SCSI Host Adapters
DTC 3180/3280 SCSI Host Adapters
Future Domain 16xx SCSI/AHA-2920A support
NCR53c7,8xx SCSI support
NCR53C8XX SCSI support
scsi3.s This is a SCSI kernel with support for various
controllers.
This disk supports these SCSI controllers:
Western Digital 7000FASST SCSI support
Always IN2000 SCSI support
Intel/ICP (former GDT SCSI Disk Array) RAID
PCI2000I EIDE interface card
PCI2220i EIDE interface card
PSI240i EIDE interface card
Qlogic FAS SCSI support
QLogic ISP FC (ISP2100 SCSI-FCP) support
Seagate ST01/ST02, Future Domain TMC-885/950 SCSI
SYM53c416 SCSI host adapter
UltraStor 14F, 24F and 34F SCSI-2 host adapters
Workbit NinjaSCSI-32Bi/UDE support
There is a raid kernel in Slackware -current. You could try this too: ftp://ftp.slackware.com/pub/slackwar...ernels/raid.s/. When using RAID, check the BIOS option that enables IDE identification for SATA. If it works, you could disable it and enable native SATA later.
Last edited by Alien_Hominid; 04-13-2006 at 02:24 PM.
From what I read, it turns out that the RAID on my motherboard isn't hardware RAID; it only works with Windows. So instead of using RAID I'm just going to use cron to make nightly backups for now.
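Since the fallback is cron-driven nightly backups, here is a minimal sketch of a tar-based backup script. Every path in it is a hypothetical placeholder (the thread doesn't specify any), and the demo payload line exists only so the sketch runs anywhere as-is:

```shell
#!/bin/sh
# Nightly backup sketch: tar a tree into a dated archive.
# SRC and DEST are hypothetical placeholders; point them at the real data.
SRC=${SRC:-/tmp/backup-demo/src}
DEST=${DEST:-/tmp/backup-demo/dest}
mkdir -p "$SRC" "$DEST"
echo "hello" > "$SRC/file.txt"     # demo payload so the sketch is runnable
# -C keeps the archive paths relative instead of absolute
tar czf "$DEST/backup-$(date +%Y%m%d).tar.gz" \
    -C "$(dirname "$SRC")" "$(basename "$SRC")"
ls "$DEST"
```

On the real machine you would then add a crontab entry (via `crontab -e`) along the lines of `0 2 * * * /usr/local/sbin/nightly-backup.sh` to run it at 2am.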
Yes, Linux definitely does software RAID; with IDE or SATA it's on par, performance-wise, with "hardware RAID" solutions. I put that in quotes because most onboard RAID is really quasi-hardware anyway, and is tied to Windows (like the Intel ICH5R/ICH6R/ICH7R controllers).
That said, I've done software RAID with SCSI drives in Slackware 10.1 just for fun - RAID 0, RAID 1, and RAID 5 (I had ten 10GB SCSI drives I inherited). Works great, lasts long time.
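For the software RAID route, mdadm is the usual Linux userland tool (raidtools also existed in that era). Below is a sketch of the commands for building a two-disk mirror; the device names (/dev/sda1, /dev/sdb1) and the choice of reiserfs are assumptions, not from the thread. Because `mdadm --create` is destructive, the sketch writes the commands to a script for review instead of executing them:

```shell
# Sketch only: stage the mirror-building commands in a script for review
# rather than running them, since mdadm --create wipes the member disks.
# Device names below are hypothetical.
cat > /tmp/mkmirror.sh <<'EOF'
#!/bin/sh
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkreiserfs /dev/md0                       # or mke2fs -j /dev/md0 for ext3
mdadm --detail --scan >> /etc/mdadm.conf  # record the array for boot-time assembly
EOF
chmod +x /tmp/mkmirror.sh
cat /tmp/mkmirror.sh
```

Run the staged script as root only on the machine whose disks you intend to mirror, after double-checking the device names against fdisk -l.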
Yes, RAID 1 works well with good performance, and it has nothing to do with the BIOS. I just installed 14 boxes in this configuration yesterday. Here is what I did (you must do this in the "expert config" mode of the partition setup).
On drive "b":
1. boot partition (ext3, but NO mount point)
2. swap partition (swap, but no mount point)
3. "Linux RAID partition"
5. "Linux RAID partition"
Then click on RAID, new, and add the two #3 partitions (one from each drive); format them as reiserfs with a mount point of /.
Again, click on RAID, new, and add the two #5 partitions; format them as reiserfs with a mount point of /usr.
This worked fine for me, EXCEPT that on some of the installs the installer would seem to "hang". It wasn't really hung; it just appeared stuck at a certain percentage (the percentage varied from machine to machine). If you let it go, it finishes fine.
It appears the install gets into a situation where it has to resync the /usr partition, and that holds up the install. (Took me three install tries to realize this - damn!)
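The resync described above can be watched from another console: the kernel reports md array state and rebuild progress in /proc/mdstat. A sketch, guarded so it is harmless on a machine with no md driver loaded:

```shell
# Check software-RAID resync progress; only meaningful once an md array exists.
if [ -r /proc/mdstat ]; then
    cat /proc/mdstat    # during a rebuild shows a progress line, e.g. "resync = 37.2%"
else
    echo "no /proc/mdstat: md driver not loaded on this machine"
fi
```

During an install, switching to a spare virtual console and running this would confirm the "hang" is really just the mirror resyncing.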