Old 01-31-2011, 07:56 AM   #1
mlpa
Member
 
Registered: May 2008
Location: Aveiro
Distribution: Slackware
Posts: 542

Rep: Reputation: 50
Slackware64 13.1 raid-0 system


Hi, I have a new computer with 3 disks connected in a RAID-0 setup with two arrays. I created the arrays with the Intel Rapid Storage Technology utility (press Ctrl+I during boot).

I installed Windows without a problem and it is running just fine.
I installed Slackware64-current on a device named /dev/md126p5 and that went well too.

My problem was with installing LILO: it gave me a RAID fatal error.

Am I doing something wrong?
 
Old 01-31-2011, 10:46 AM   #2
Darth Vader
Senior Member
 
Registered: May 2008
Location: Romania
Distribution: DARKSTAR Linux 2008.1
Posts: 2,727

Rep: Reputation: 1247
You should use a RAID1 partition with metadata version 0.90 for the /boot partition. LILO and GRUB can only work with RAID1 partitions of that kind.
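For illustration, creating such a /boot array with mdadm could look roughly like this. It is only a sketch: the member partitions /dev/sda1 and /dev/sdb1 and the array name /dev/md0 are assumptions, so adjust them to your own layout.

Code:
# Small RAID1 array with the old 0.90 superblock, which lives at the END
# of each member partition, so boot loaders still see a normal filesystem.
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=0.90 \
      /dev/sda1 /dev/sdb1

mkfs.ext2 /dev/md0     # any filesystem LILO understands is fine
mount /dev/md0 /boot   # and add it to /etc/fstab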
 
Old 01-31-2011, 11:13 AM   #3
mlpa
Member
 
Registered: May 2008
Location: Aveiro
Distribution: Slackware
Posts: 542

Original Poster
Rep: Reputation: 50
Thanks for the reply, but the main goal for my system is better performance.
Creating a RAID1 array was not part of my plan.

I have searched for information about RAID and Linux; can someone share their experience?
For instance, I think my RAID setup is firmware RAID,
but there are also software RAID and hardware RAID.

The Intel page says to use the Intel utility at boot to create the arrays, and that Linux will then support the arrays through mdadm.

But some wikis, like the Linux RAID, Gentoo Software RAID and Gentoo BIOS RAID pages, describe different methods.

Which is best, software RAID or BIOS RAID?
 
Old 01-31-2011, 11:27 AM   #4
Darth Vader
Senior Member
 
Registered: May 2008
Location: Romania
Distribution: DARKSTAR Linux 2008.1
Posts: 2,727

Rep: Reputation: 1247
As I said, you need a small (1-4 GB) RAID1 partition mounted on /boot, with LILO installed on that partition, to be able to successfully boot your RAID0/RAID10/RAID5 partitions and operating system.

As a final note, LILO and GRUB can boot ONLY from a RAID1 partition.

Last edited by Darth Vader; 01-31-2011 at 11:29 AM.
 
Old 01-31-2011, 03:01 PM   #5
mlpa
Member
 
Registered: May 2008
Location: Aveiro
Distribution: Slackware
Posts: 542

Original Poster
Rep: Reputation: 50
Is there some reason for that? Where can I find information about LILO and RAID?
I found information about LILO and RAID1 on this page.

It says the two drives need to be specified in lilo.conf. Can't I just pass /dev/mdxxx?

And the other question: which is better, software RAID or BIOS fake RAID?

I assumed the chipset RAID would give higher performance.
 
Old 01-31-2011, 03:57 PM   #6
mRgOBLIN
Slackware Contributor
 
Registered: Jun 2002
Location: New Zealand
Distribution: Slackware
Posts: 999

Rep: Reputation: 231
You'll need mdadm version 3.x.x for this support.

Software RAID works very well and, in my experience, actually performs better than BIOS-level (not pure hardware) RAID.
BIOS-level RAID is probably the best option if you need to share data between your Linux and Windows partitions.
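If you want to check what you actually have, something along these lines should work; it is only a rough sketch, and the exact output depends on your system.

Code:
# mdadm 3.x is needed for Intel Matrix Storage (IMSM) fake-RAID support
mdadm --version

# Does the platform/option ROM support IMSM arrays?
mdadm --detail-platform

# Look for imsm containers among the assembled arrays
cat /proc/mdstat
mdadm --examine --scan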

This thread should interest you.

http://www.linuxquestions.org/questi...anager-807930/
 
Old 01-31-2011, 03:57 PM   #7
Darth Vader
Senior Member
 
Registered: May 2008
Location: Romania
Distribution: DARKSTAR Linux 2008.1
Posts: 2,727

Rep: Reputation: 1247
1. Is there some reason for that?

Yes. LILO doesn't know how to assemble RAID devices from fake RAID or software RAID. LILO is not a kernel with device drivers, only a boot loader, and it works in real mode, like good old MS-DOS.

RAID1 is a special case, because every member partition is an identical copy. In reality, LILO sees only /dev/sda1 or /dev/sdb1, depending on which device is booted.

2. Where can I find information about LILO and RAID?

How about README_RAID.TXT from the Slackware installation media?

3. And the other question: which is better, software RAID or BIOS fake RAID?

Technically, Linux software RAID always gets better performance than BIOS fake RAID, and it is more versatile. Of course, the best results come from hardware RAID with dedicated RAID cards.
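If you want rough numbers on your own hardware, a simple sequential-read test is usually enough; this is just a sketch, and /dev/sda and /dev/md126 stand in for one member disk and the assembled array.

Code:
# Compare raw sequential read speed of a single disk vs. the striped array;
# run each a few times and ignore the first, cold-cache result.
hdparm -t /dev/sda
hdparm -t /dev/md126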

Here is my /etc/lilo.conf, used to boot a partitionable RAID0 array with a RAID1 partition for /boot.

Code:
# LILO configuration file
# generated by 'liloconfig'
#
# Start LILO global section

lba32 # Allow booting past 1024th cylinder with a recent BIOS

# Append any additional kernel parameters:
append="console=tty1 vt.default_utf8=1 video=1280x1024-24@60.0"

boot = /dev/md0

map = /boot/boot.map

raid-extra-boot = mbr

# default = Linux-generic

# Boot BMP Image.
# Bitmap in BMP format: 640x480x8
#bitmap = /boot/slack.bmp
# Menu colors (foreground, background, shadow, highlighted
# foreground, highlighted background, highlighted shadow):
#bmp-colors = 255,0,255,0,255,0
# Location of the option table: location x, location y, number of
# columns, lines per column (max 15), "spill" (this is how many
# entries must be in the first column before the next begins to
# be used. We don't specify it here, as there's just one column.
#bmp-table = 60,6,1,16
# Timer location x, timer location y, foreground color,
# background color, shadow color.
#bmp-timer = 65,27,0,255

# Standard menu.
# Or, you can comment out the bitmap menu above and
# use a boot message with the standard menu:
message = /boot/boot_message.txt

# Wait until the timeout to boot (if commented out, boot the
# first entry immediately):
prompt

# Timeout before the first entry boots.
# This is given in tenths of a second, so 600 for every minute:
timeout = 100

# Override dangerous defaults that rewrite the partition table:
change-rules
reset

# VESA framebuffer console @ 1280x1024x64k
vga=794

# Normal VGA console
# vga = normal
# VESA framebuffer console @ 1024x768x64k
# vga=791
# VESA framebuffer console @ 1024x768x32k
# vga=790
# VESA framebuffer console @ 1024x768x256
# vga=773
# VESA framebuffer console @ 800x600x64k
# vga=788
# VESA framebuffer console @ 800x600x32k
# vga=787
# VESA framebuffer console @ 800x600x256
# vga=771
# VESA framebuffer console @ 640x480x64k
# vga=785
# VESA framebuffer console @ 640x480x32k
# vga=784
# VESA framebuffer console @ 640x480x256
# vga=769

# End LILO global section

# Linux bootable partition config begins
image = /boot/vmlinuz-2.6.37-bigmem64
    root = /dev/md1p1
    label = LX2.6.37-BM64
    initrd = /boot/initrd-2.6.37-bigmem64.gz
read-only
# Linux bootable partition config ends

# Linux bootable partition config begins
image = /boot/vmlinuz-2.6.36.3-bigmem64
    root = /dev/md1p1
    label = LX2.6.36.3-BM64
    initrd = /boot/initrd-2.6.36.3-bigmem64.gz
read-only
# Linux bootable partition config ends

# Linux bootable partition config begins
image = /boot/vmlinuz-generic-smp-2.6.35.10-smp
    root = /dev/md1p1
    label = Linux-generic
    initrd = /boot/initrd-2.6.35.10-smp.gz
read-only
# Linux bootable partition config ends
In my setup, which uses software RAID, /dev/md0 is the RAID1 array with metadata version 0.90, and /dev/md1 is the partitionable RAID0 array with metadata version 1.2. /dev/md1p1 is, of course, the first partition of the partitionable array.

Also, I use an initrd to assemble the RAID arrays before the real system boots.
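For reference, the RAID0 part of a layout like that could be created roughly as follows; this is only a sketch, and the member partitions /dev/sd[abc]2 are an assumption.

Code:
# Partitionable RAID0 array that carries the real system
mdadm --create /dev/md1 --level=0 --raid-devices=3 --metadata=1.2 \
      /dev/sda2 /dev/sdb2 /dev/sdc2
fdisk /dev/md1        # creates /dev/md1p1, /dev/md1p2, ...

# Record the arrays in mdadm.conf for assembly at boot
mdadm --examine --scan >> /etc/mdadm.conf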
 
1 member found this post helpful.
Old 01-31-2011, 04:35 PM   #8
mostlyharmless
Senior Member
 
Registered: Jan 2008
Distribution: Arch/Manjaro, might try Slackware again
Posts: 1,851
Blog Entries: 14

Rep: Reputation: 284
GRUB, unlike LILO, can boot from a RAID0 array; you'll need an initrd to set up the array, though. In fact, I have Slackware 13.1 dual-booting with WinXP off such an array.
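For what it's worth, with legacy GRUB on a BIOS fake-RAID setup the install step is roughly the usual one, because the option ROM presents the striped array to the BIOS as a single disk. The following is only a sketch; (hd0,4) assumes /boot lives on the fifth partition of that array.

Code:
# Install legacy GRUB to the MBR of the BIOS-visible array
grub --batch <<EOF
root (hd0,4)
setup (hd0)
quit
EOF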
 
1 member found this post helpful.
Old 01-31-2011, 05:19 PM   #9
mlpa
Member
 
Registered: May 2008
Location: Aveiro
Distribution: Slackware
Posts: 542

Original Poster
Rep: Reputation: 50
Quote:
Originally Posted by Darth Vader View Post
Yes. LILO doesn't know how to assemble RAID devices from fake RAID or software RAID. [...] Also, I use an initrd to assemble the RAID arrays before the real system boots.
Since I'm using fake RAID, do I need a special initrd?

And another question: I have 3 disks, one array, and two RAID0 volumes.
Can I make another RAID1 volume on the same disks?

Last edited by mlpa; 01-31-2011 at 05:25 PM.
 
Old 01-31-2011, 05:25 PM   #10
Darth Vader
Senior Member
 
Registered: May 2008
Location: Romania
Distribution: DARKSTAR Linux 2008.1
Posts: 2,727

Rep: Reputation: 1247
Quote:
Originally Posted by mlpa View Post
Since I'm using fake RAID, do I need a special initrd?
Yup! Of course you need an initrd to assemble the arrays before the real system boots from your RAID0.

Here is a little script, mkinitrd-2.6.35.10-smp.sh, which I use to generate a proper initrd.

Code:
#!/bin/sh
# Build an initrd for this kernel, with RAID support and the ext4 modules,
# so the md arrays are assembled before the root filesystem is mounted.

KVERSION=2.6.35.10-smp   # kernel version to build the initrd for
ROOTFS=ext4              # filesystem type of the root partition
ROOTDEV=/dev/md1p1       # root device on the partitionable RAID0 array

mkinitrd -c -u -R -k $KVERSION -f $ROOTFS -r $ROOTDEV -m jbd2:mbcache:ext4 -o /boot/initrd-$KVERSION.gz
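After running the script, you would reinstall LILO so the matching initrd line in lilo.conf picks up the fresh image; roughly like this (a sketch, using the script name given above):

Code:
sh mkinitrd-2.6.35.10-smp.sh   # builds /boot/initrd-2.6.35.10-smp.gz
lilo -v                        # rerun LILO so it picks up the new initrd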
 
2 members found this post helpful.
Old 01-31-2011, 05:50 PM   #11
mlpa
Member
 
Registered: May 2008
Location: Aveiro
Distribution: Slackware
Posts: 542

Original Poster
Rep: Reputation: 50
And for my other question, does anyone know anything that could help?
 
Old 02-01-2011, 06:59 PM   #12
mlpa
Member
 
Registered: May 2008
Location: Aveiro
Distribution: Slackware
Posts: 542

Original Poster
Rep: Reputation: 50
Quote:
Originally Posted by Darth Vader View Post
Yup! Of course you need an initrd to assemble the arrays before the real system boots from your RAID0. [...]
Just one question: do I have software RAID or fake RAID?
 
Old 02-01-2011, 07:01 PM   #13
mlpa
Member
 
Registered: May 2008
Location: Aveiro
Distribution: Slackware
Posts: 542

Original Poster
Rep: Reputation: 50
Quote:
Originally Posted by mostlyharmless View Post
GRUB, unlike LILO, can boot from a RAID0 array; you'll need an initrd to set up the array, though. In fact, I have Slackware 13.1 dual-booting with WinXP off such an array.
Can you confirm that this tutorial is close to what you have?
 
Old 02-02-2011, 07:56 AM   #14
mostlyharmless
Senior Member
 
Registered: Jan 2008
Distribution: Arch/Manjaro, might try Slackware again
Posts: 1,851
Blog Entries: 14

Rep: Reputation: 284
Close enough, but I'm running Slackware, so the setup procedure is different, and I had Windows installed first. Once it's set up, it's relatively easy to change distros. I had Slackware 12.1 when I set it up, tried OpenSUSE 11.3, and am now on Slackware 13.1; I didn't have to repartition or reinstall Windows...

Last edited by mostlyharmless; 02-02-2011 at 08:18 AM. Reason: More info
 
Old 02-02-2011, 08:55 AM   #15
mlpa
Member
 
Registered: May 2008
Location: Aveiro
Distribution: Slackware
Posts: 542

Original Poster
Rep: Reputation: 50
Quote:
Originally Posted by mostlyharmless View Post
Close enough, but I'm running Slackware, so the setup procedure is different, and I had Windows installed first. Once it's set up, it's relatively easy to change distros. I had Slackware 12.1 when I set it up, tried OpenSUSE 11.3, and am now on Slackware 13.1; I didn't have to repartition or reinstall Windows...
Can you share your method?
 
  

