Fedora — This forum is for the discussion of the Fedora Project.
I have a server that recently had 4 x 300GB 15k RPM SCSI drives added to it.
They were added so that I can write my database to a high-speed array.
Unfortunately, the host-based RAID controller onboard the box supports only one RAID array, and that is already in use as a RAID 1 mirror across 2 x 73GB SCSI drives for my host OS (Fedora Core 4 x86_64).
Ok, so now I have 1 x RAID 1 hardware controlled array, and I need some redundancy and high speed access.
My machines are Sunfire V40z's and have more than enough spare CPU and I/O to handle software RAID, so I am trying to configure the following:
Ok, while in theory this is very simple, with Linux software RAID there are a few caveats...
1. You cannot use the raw drive (i.e. /dev/sdb); you have to create a partition on the drive and set its partition type to Linux RAID autodetect.
So I partition each drive, create 1 primary partition using all disk space, and use fdisk to set the partition type to Linux RAID Autodetect.
Doing this essentially gives the drives a "persistent superblock" so that the drives will be recognised at boot time.
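For anyone following along, the partitioning step can be scripted rather than done interactively in fdisk. This is just a sketch — the device names sdb through sde are an assumption about where the four new drives landed:

```shell
#!/bin/sh
# Assumed device names for the four new 300GB drives -- check with
# `fdisk -l` or `cat /proc/scsi/scsi` before running anything like this.
for d in sdb sdc sdd sde; do
    # ',,fd' = one primary partition spanning the whole disk,
    # partition type fd (Linux RAID autodetect)
    echo ',,fd' | sfdisk /dev/$d
done
```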
Ok, /dev/md1 and /dev/md2 are both easy and are recognised by my kernel/OS when booting, and I have two functional RAID 1 mirrors with no data.
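For reference, the two mirrors would have been created with something like the following — which partitions pair up with which mirror is my assumption, not stated in the post:

```shell
# Two RAID 1 mirrors from the four assumed partitions
mdadm -C /dev/md1 --level=raid1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm -C /dev/md2 --level=raid1 --raid-devices=2 /dev/sdd1 /dev/sde1
# Watch the initial resync progress:
cat /proc/mdstat
```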
I now need to make a RAID 0 stripe across these two mirrors to get my RAID 1+0 (10) up.
mdadm -C /dev/md3 --level=raid0 --raid-devices=2 /dev/md1 /dev/md2
to create a RAID 0 array using the four disks in the aforementioned config.
This works great; I can now create and mount my filesystem:
mount /dev/md3 /data/
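Spelled out, the elided filesystem-creation step would be something like this — ext3 is an assumption, the post doesn't say which filesystem was used:

```shell
mkfs.ext3 /dev/md3    # filesystem type is an assumption
mkdir -p /data
mount /dev/md3 /data/
```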
Alright, at this point I have almost 600GB of usable space in exactly the configuration I want.
My problem is this:
How do I make this persistent across boots?
Only my two RAID 1 arrays are persistent across boots; no matter what I try, the RAID 0 stripe fails.
I have tried scripting using mdadm to rebuild the array (assemble) but it just tells me that the drives are part of an existing array.
I have tried fdisking /dev/md3 and creating a Linux RAID Autodetect partition /dev/md31 to get the persistent superblock happening, but this fails on boot as /dev/md3 does not exist until Linux has booted to a point.
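For what it's worth, the assemble approach mentioned above can also be driven from mdadm's own config file, which the init scripts consult at boot. A sketch — the device lists are assumptions matching the layout described in the thread:

```
# /etc/mdadm.conf -- sketch; verify devices against your own `mdadm -D` output
DEVICE /dev/sd[bcde]1 /dev/md1 /dev/md2
ARRAY /dev/md1 devices=/dev/sdb1,/dev/sdc1
ARRAY /dev/md2 devices=/dev/sdd1,/dev/sde1
# The stripe is listed last so its members exist before it is assembled
ARRAY /dev/md3 devices=/dev/md1,/dev/md2
```

With this in place, `mdadm -As` assembles everything listed in the file in order.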
I think you can achieve this by using the /etc/raidtab file. Persistent superblocks are required to boot from a software RAID, but for your particular setup raidtab could work fine.
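A raidtab entry for the stripe over the two mirrors might look something like this — the chunk size is a placeholder, and the exact stanza should be checked against the raidtab(5) man page:

```
# /etc/raidtab -- sketch of the RAID 0 stripe across the two mirrors
raiddev /dev/md3
    raid-level              0
    nr-raid-disks           2
    persistent-superblock   0
    chunk-size              64
    device                  /dev/md1
    raid-disk               0
    device                  /dev/md2
    raid-disk               1
```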
Another option (my personal preference) is using LVM. You can add your RAID 1 sets into a Volume Group (linear or striped) and create Logical Volumes from it. This will give you more flexibility (e.g. not allocating all space yet so you can grow the volumes afterwards) and you can work with meaningful device names (e.g. /dev/oracle/db).
Beware: if you do go for LVM and you create a striped Volume Group, you cannot expand it with more physical devices afterwards!
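As a sketch of that LVM route — the volume group name, logical volume name, size, and filesystem below are all placeholders, not values from the thread:

```shell
# Put the two RAID 1 mirrors into a volume group and carve out a striped LV.
pvcreate /dev/md1 /dev/md2
vgcreate oracle /dev/md1 /dev/md2
# -i 2 stripes across both PVs, -I 64 sets a 64KB stripe size;
# deliberately allocate less than the full VG so the LV can grow later
lvcreate -i 2 -I 64 -L 200G -n db oracle
mkfs.ext3 /dev/oracle/db    # filesystem type is an assumption
mount /dev/oracle/db /data
```

Note the striping caveat above: with `-i 2` the LV is striped, so extending the VG with a single new mirror later won't let that striped LV use it.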
Ok, so I leave persistent superblocks to take care of /dev/md1 and /dev/md2 and set up an /etc/raidtab file with some detail about the RAID 0 stripe across the two RAID 1 mirrors.
I have considered using LVM on the RAID 0 stripe; that way I can add additional mirrors into the underlying array and then increase the size of the RAID 0 array by adding new devices to it. Then I simply increase the size of the LVM group.
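If LVM sits over the mirrors themselves (as suggested above) rather than over /dev/md3, the growth path could look something like this — the new device names and sizes are hypothetical:

```shell
# Build a third mirror from two hypothetical new drives...
mdadm -C /dev/md4 --level=raid1 --raid-devices=2 /dev/sdf1 /dev/sdg1
# ...add it to the (assumed) volume group, then grow a logical volume
vgextend oracle /dev/md4
lvextend -L +100G /dev/oracle/db
```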
For raidtab, do I need the raidtools package installed?