Old 02-22-2012, 03:21 PM   #1
whizje
Member
 
Registered: Sep 2008
Location: The Netherlands
Distribution: Slackware64 current
Posts: 594

Rep: Reputation: 141
Installing Slackware 13.37 on a 3-disk md raid10 without initrd


Hi all

I want to share my experiment with installing Linux on a 3-disk md raid10.

My rig:
AMD Phenom II X4, 3.4 GHz
4 GB DDR2
2 x 1 TB SATA HDD: /dev/sda, /dev/sdb
1 x 2 TB SATA HDD: /dev/sdc

Why RAID 10:
There are two types of RAID: hardware and software. The "hardware" RAID you find on motherboards is really just another form of software RAID. I chose software RAID with the md driver because it makes it possible to build RAID 10 from 3 disks and/or partitions, of which you may lose one without losing your system, and you get better read performance with big files because the data is striped across the disks, so you can read from multiple spindles. With hardware RAID 10 you need 4 disks, which are divided into 2 mirrored pairs; from each pair you may lose 1 disk.
So the goal is more performance and more safety. Warning: RAID is no substitute for a backup. There is a price, though: you lose half your disk space. My setup uses two 1 TB disks and one 2 TB disk, the latter divided in two because RAID 10 needs equally sized disks/partitions.
That gives a 3 TB RAID 10 array with 1.5 TB of usable space.

I used a Slackware-current ISO, which I put on a USB stick with unetbootin.

The boot order in the BIOS was set to sdc, and I used the BIOS boot menu (F8) to boot the USB stick.
The first thing I did was create two partitions on sdc: a small 30 MB boot partition, which will not be part of the RAID 10 array because lilo cannot boot from a software RAID 10, and a second partition of 1 TB. After deleting all partitions from sda and sdb, I created the RAID 10 array.

Code:
bash-4.1# mdadm --create /dev/md_d0 --auto=yes --level=raid10 --bitmap=internal --layout=f2 --chunk=256 --raid-devices=3  --metadata=0.90 /dev/sda /dev/sdb /dev/sdc2
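Before partitioning, it is worth checking that the array assembled cleanly. A minimal sketch (commands only, output omitted):
Code:
cat /proc/mdstat
mdadm --detail /dev/md_d0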
After that, I created 3 partitions on the RAID 10 array with
Code:
fdisk /dev/md_d0
I aligned the partitions on 3072 bytes to be both sector aligned and stride aligned. After some more reading I discovered that stride alignment is not necessary with RAID 10, so sector alignment on 1024 bytes should be enough. I created a 4 GB swap partition (to get hibernation working in the future ;-) ), a 12 GB root partition and a 60 GB home partition; a formatting sketch follows the partition table below.
Code:
Disk /dev/md_d0: 1499.8 GB, 1499837497344 bytes
2 heads, 4 sectors/track, 366171264 cylinders, total 2929370112 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 786432 bytes
Disk identifier: 0xaddf2727

      Device Boot      Start         End      Blocks   Id  System
/dev/md_d0p1            6144     8404991     4199424   82  Linux swap
/dev/md_d0p2         8404992    33570815    12582912   83  Linux
/dev/md_d0p3        33570816   159399935    62914560   83  Linux
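Slackware's setup normally formats the partitions for you, but for completeness, doing it by hand could look roughly like this. This is only a sketch: ext4 and the stride/stripe_width values are my assumptions, derived from the 256 KiB chunk and the 768 KiB optimal I/O size fdisk reports above.
Code:
mkswap /dev/md_d0p1
# assumed: stride = 256 KiB chunk / 4 KiB block = 64, stripe_width = 768 KiB / 4 KiB = 192
mkfs.ext4 -E stride=64,stripe_width=192 /dev/md_d0p2
mkfs.ext4 -E stride=64,stripe_width=192 /dev/md_d0p3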
Then I ran setup. The only difference from a normal setup was that I chose the expert lilo setup, so that I could point lilo at /dev/sdc as the boot device. Also, lilo would not run with /dev/md_d0p2 as root, so for the moment I used /dev/sdc1 as root. When rebooting I passed the real root as a kernel parameter:
Code:
root=/dev/md_d0p2
After finishing the setup and rebooting, it failed miserably.
After some more reading I added these kernel parameters:
Code:
raid=part md=d0,/dev/sda,/dev/sdb,/dev/sdc2 root=/dev/md_d0p2
Now it failed less miserably: I got two arrays, md0 with sdc2 and md_d0 with sda and sdb, but because it is a RAID 10 I could still boot with one drive missing from /dev/md_d0. After searching through the startup scripts and udev rules, I discovered that it was md's autodetection that created the problem. I stopped the array md0 and added sdc2 back to the array (a rough sketch of those commands follows the snippet below), put the following append in lilo.conf and adjusted the root.
Code:
append=" raid=noautodetect md=d0,/dev/sda,/dev/sdb,/dev/sdc2"
root = /dev/md_d0p2
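For reference, stopping the stray array and re-adding sdc2 looked roughly like this (a sketch; md0 is the name autodetection gave the stray array on my system):
Code:
mdadm --stop /dev/md0
mdadm /dev/md_d0 --add /dev/sdc2
cat /proc/mdstat    # watch the member resync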
My Lilo.conf
Code:
# LILO configuration file
# generated by 'liloconfig'
#
# Start LILO global section
boot = /dev/sdc
compact        # faster, but won't work on all systems.
# Boot BMP Image.
# Bitmap in BMP format: 640x480x8
  bitmap = /boot/slack.bmp
# Menu colors (foreground, background, shadow, highlighted
# foreground, highlighted background, highlighted shadow):
  bmp-colors = 255,0,255,0,255,0
# Location of the option table: location x, location y, number of
# columns, lines per column (max 15), "spill" (this is how many
# entries must be in the first column before the next begins to
# be used.  We don't specify it here, as there's just one column.
  bmp-table = 60,6,1,16
# Timer location x, timer location y, foreground color,
# background color, shadow color.
  bmp-timer = 65,27,0,255
# Standard menu.
# Or, you can comment out the bitmap menu above and
# use a boot message with the standard menu:
#message = /boot/boot_message.txt
# Append any additional kernel parameters:
append=" raid=noautodetect md=d0,/dev/sda,/dev/sdb,/dev/sdc2 vt.default_utf8=1"
prompt
timeout = 50
# VESA framebuffer console @ 1024x768x64k
vga = 791
# End LILO global section
# Linux bootable partition config begins
image = /boot/vmlinuz
  root = /dev/md_d0p2
  label = Linux
  read-only  # Partitions should be mounted read-only for checking
# Linux bootable partition config ends
And now it worked flawlessly after running lilo.
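For completeness, the final step plus a quick sanity check could look like this (the mdstat check is just my suggested verification, not part of the original steps):
Code:
lilo -v             # install the boot loader to /dev/sdc as configured above
cat /proc/mdstat    # after reboot, all three members of md_d0 should show as [UUU]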

info:
http://linux.die.net/man/8/mdadm
https://raid.wiki.kernel.org
http://spinics.net/lists/raid/maillist.html
 
Old 02-23-2012, 11:55 AM   #2
NyteOwl
Member
 
Registered: Aug 2008
Location: Nova Scotia, Canada
Distribution: Slackware, OpenBSD, others periodically
Posts: 512

Rep: Reputation: 139
RAID 10 first pairs drives into RAID 1 arrays and then stripes across those arrays. In theory you can lose half of the drives without losing the array, as long as no 2-drive RAID 1 group loses both of its members. If any entire RAID 1 group fails, the whole array is lost.

While you can seemingly do RAID 10 with three drives using your setup, if the single 2 TB drive fails you lose the entire array. This is no safer than RAID 0.

You would be better off setting up RAID 0+1, where you stripe the data across the two 1 TB drives in a RAID 0 array and then mirror that array onto the 2 TB drive in a RAID 1 array. That way you could lose any single drive and still recover. Or do it right and use 4 drives if you want RAID 10.
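For anyone wanting to try that 0+1 layout, a minimal md sketch could be (array and device names are assumptions; sdc1 here stands for a roughly 2 TB partition on the big drive):
Code:
# stripe the two 1 TB drives together
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
# mirror that stripe onto the 2 TB drive
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/md0 /dev/sdc1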
 
Old 02-23-2012, 02:50 PM   #3
whizje
Member
 
Registered: Sep 2008
Location: The Netherlands
Distribution: Slackware64 current
Posts: 594

Original Poster
Rep: Reputation: 141
If you had read the thread, you would know that only half of the 2 TB drive is used for the RAID array. md RAID 10 offers an option to use 3 drives and create a real RAID 10 array in which you can lose one of the drives. See http://en.wikipedia.org/wiki/Non-sta...nux_MD_RAID_10

Quote:
Linux MD RAID 10

The Linux kernel software RAID driver (called md, for "multiple device") can be used to build a classic RAID 1+0 array, but also as a single level[8] with some interesting extensions.[9]

The standard "near" layout, where each chunk is repeated n times in a k-way stripe array, is equivalent to the standard RAID 10 arrangement, but it does not require that n evenly divide k. For example an n2 layout on 2, 3 and 4 drives would look like:

2 drives
--------
A1 A1
A2 A2
A3 A3
A4 A4
.. ..

3 drives
----------
A1 A1 A2
A2 A3 A3
A4 A4 A5
A5 A6 A6
.. .. ..

4 drives
--------------
A1 A1 A2 A2
A3 A3 A4 A4
A5 A5 A6 A6
A7 A7 A8 A8
.. .. .. ..

The 4-drive example is identical to a standard RAID-1+0 array, while the 3-drive example is a software implementation of RAID-1E. The 2-drive example is equivalent to RAID 1.
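As a quick check, mdadm reports which RAID 10 layout an existing array uses (near=2, far=2, and so on) in its "Layout" line; a sketch:
Code:
mdadm --detail /dev/md_d0 | grep -i layout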
 
Old 02-27-2012, 09:23 PM   #4
NyteOwl
Member
 
Registered: Aug 2008
Location: Nova Scotia, Canada
Distribution: Slackware, OpenBSD, others periodically
Posts: 512

Rep: Reputation: 139
I did read the post. Did you read mine? Using a single drive as part of a RAID 10 array removes most of the advantage of using RAID 10. Anyway...
 
Old 02-28-2012, 02:48 PM   #5
whizje
Member
 
Registered: Sep 2008
Location: The Netherlands
Distribution: Slackware64 current
Posts: 594

Original Poster
Rep: Reputation: 141
Yes I did, and I did lose the 2 TB drive without losing the array; it came up fine.
 
  

