Converting existing RAID1 (where /root, /swap, /usr, and /var reside) to RAID10
Linux - Newbie
Note that md0 is a 4-way mirror of the devices /dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1, while md1 is presently just a simple RAID1 mirror of /dev/sda2 and /dev/sdb2. The remaining devices, /dev/sdc2 and /dev/sdd2, are not yet involved in the configuration. I left them out when I originally set the system up because I couldn't figure out how to combine them into RAID10 (or RAID1+0, if you like) with the other two devices /dev/sda2 and /dev/sdb2. My plan now is to finally complete the setup and create the RAID10 device md10 from the existing mirror md1 and a new mirror md2, the latter made up of /dev/sdc2 and /dev/sdd2. Please note that the system and root partitions are already mounted and active in RAID, so any change that temporarily deactivates the RAID device containing them will make the system unavailable.
Here is some more information.
The RAID device md0 contains /boot :
Code:
sp@barbaro:~$ sudo fdisk -l /dev/md0
Disk /dev/md0: 197 MB, 197263360 bytes
2 heads, 4 sectors/track, 48160 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000
Disk /dev/md0 doesn't contain a valid partition table
... and md1 contains the rest of the filesystems, including root, swap, etc.:
Code:
sp@barbaro:~$ sudo fdisk -l /dev/md1
Disk /dev/md1: 159.8 GB, 159800623104 bytes
2 heads, 4 sectors/track, 39013824 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000
Disk /dev/md1 doesn't contain a valid partition table
The entire system on md1 is in LVM(2) in the following way:
Code:
sp@barbaro:~$ sudo pvdisplay
--- Physical volume ---
PV Name /dev/md1
VG Name vol_grp
PV Size 148.83 GB / not usable 1.75 MB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 38099
Free PE 0
Allocated PE 38099
PV UUID 8394Rm-AU72-6RZ7-5RQG-NWdR-R74D-wuREIu
Code:
sp@barbaro:~$ sudo lvscan
ACTIVE '/dev/vol_grp/swap' [2.00 GB] inherit
ACTIVE '/dev/vol_grp/root' [3.50 GB] inherit
ACTIVE '/dev/vol_grp/home' [10.00 GB] inherit
ACTIVE '/dev/vol_grp/var-log' [2.00 GB] inherit
ACTIVE '/dev/vol_grp/tmp' [5.00 GB] inherit
ACTIVE '/dev/vol_grp/usr' [20.00 GB] inherit
ACTIVE '/dev/vol_grp/var' [20.00 GB] inherit
ACTIVE '/dev/vol_grp/usr-local' [5.00 GB] inherit
ACTIVE '/dev/vol_grp/store' [81.32 GB] inherit
... such that the current situation looks like this:
Code:
sp@barbaro:~$ sudo fdisk -l
Disk /dev/sda: 160.0 GB, 160000000000 bytes
255 heads, 63 sectors/track, 19452 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000ccf7b
Device Boot Start End Blocks Id System
/dev/sda1 * 1 24 192748+ fd Linux raid autodetect
/dev/sda2 25 19452 156055410 fd Linux raid autodetect
Disk /dev/sdb: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000d1b00
Device Boot Start End Blocks Id System
/dev/sdb1 * 1 24 192748+ fd Linux raid autodetect
/dev/sdb2 25 19457 156095572+ fd Linux raid autodetect
Disk /dev/sdc: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0002dc62
Device Boot Start End Blocks Id System
/dev/sdc1 * 1 24 192748+ fd Linux raid autodetect
/dev/sdc2 25 19457 156095572+ fd Linux raid autodetect
Disk /dev/sdd: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x1c9c2d38
Device Boot Start End Blocks Id System
/dev/sdd1 * 1 24 192748+ fd Linux raid autodetect
/dev/sdd2 25 19457 156095572+ fd Linux raid autodetect
Disk /dev/md0: 197 MB, 197263360 bytes
2 heads, 4 sectors/track, 48160 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000
Disk /dev/md0 doesn't contain a valid partition table
Disk /dev/md1: 159.8 GB, 159800623104 bytes
2 heads, 4 sectors/track, 39013824 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000
Disk /dev/md1 doesn't contain a valid partition table
I'm using ReiserFS across the board on all mounts, from /boot to /store.
Whew!
Just to reiterate: what I'd like to do is bring the remaining, unused partitions /dev/sdc2 and /dev/sdd2 into play, combine them into a RAID1 device (/dev/md2), and then stripe /dev/md1 and /dev/md2 together to achieve a RAID10 array made up of the currently existing (and presently active, root-and-system-bearing) RAID1 device and the new RAID1 device.
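For concreteness, the target layout described above would be built with mdadm along these lines. This is a sketch only, using the device names from this thread: the md10 creation step writes a new superblock over /dev/md1 and would destroy the LVM data currently living there, which is exactly the problem being asked about.

```shell
# Sketch only -- NOT safe to run against the live system described above.

# Create the new mirror from the two unused partitions:
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc2 /dev/sdd2

# Stripe the two mirrors together into RAID1+0.
# WARNING: this destroys the LVM contents currently on /dev/md1,
# hence the need for a backup/restore cycle first.
mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md1 /dev/md2
```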
Thanks in advance; I appreciate your advice.
Last edited by the_answer_is_no; 06-01-2008 at 01:56 AM.
Just FYI, it would be better for the community if communication were posted to the forum rather than taken off-list, as other readers may benefit from any solutions provided.
Also, if finding a solution proves difficult, others interested in offering ideas won't have to start from scratch, since they can read what has already been tried.
Quote:
Just FYI, it would be better for the community if communication were posted to the forum rather than taken off-list, as other readers may benefit from any solutions provided.
Understood. I really only meant that someone wanting to reply to my query who, for whatever reason, wished to contact me off-list could do so via the link provided; I did not mean that replies to this query should be sent to me off-list. The link was meant as an alternative, not the primary channel. However, having re-read my posting, I agree that it does read as if I were instructing folks to reply off-list, and that was not my intention at all. Thanks for pointing this out to me.
Because I want to create a RAID10 device from two RAID1 devices:
/dev/md1 - which comprises the partitions:
/dev/sda2 -- on HDD1
/dev/sdb2 -- on HDD2
... and a new RAID1 device I wish to create:
/dev/md2 - which will comprise the partitions:
/dev/sdc2 -- on HDD3
/dev/sdd2 -- on HDD4
The RAID10 device will of necessity require that /dev/md1 and /dev/md2 be striped, or if you like, it will be a stripe of /dev/md1 and /dev/md2.
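Before building md2 from the two spare partitions, it may be worth confirming they really are unused. A read-only check along these lines (device names as above) should do it:

```shell
# Read-only sanity check: neither partition should carry an md superblock
# if they have truly never been part of an array.
mdadm --examine /dev/sdc2 /dev/sdd2

# And confirm they are not listed in any running array:
cat /proc/mdstat
```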
This would normally be easy to do, except in my case, a lack of foresight has meant that I have the entire root file system on /dev/md1 -- one of the RAID1 devices that I will need to stripe. Because:
Quote:
An existing file system cannot be converted directly to a stripe. To place an existing file system on a stripe, you must back up the file system, create the stripe, then restore the file system to the stripe.
... I'm in a bit of a bind, because the particular filesystem I wish to stripe happens to hold /root and all the other system stuff. That means I can't simply stripe the two RAID1 devices without trashing my system in the process. I can easily back up the filesystem I wish to stripe (I already have); I just can't see how to carry out the striping and keep my system up at the same time.
Is there a way to keep the existing system while creating the RAID10 device as described?
I am not an expert with RAID configurations, but I do know a few things.
First, the safest way to migrate data is to back up (which you have already done). As the quote you posted says: save the original data, create the RAID volume from scratch, and restore. As for keeping your system up while doing this: what you could do is have a second system serve as a temporary replacement. If that is not possible (e.g. cost, time), then all I can think of is to notify whoever needs the system of the downtime and perform the migration through the night, when system load is at its lowest.
I know of some RAID cards that can perform repairs/rebuilds, expansions, and migrations while the system is online (they calculate the missing data from the other disks if you have parity). You said you have a fake RAID configuration? This would probably be difficult, but then again I'm not an expert.
Yes, the things you suggest do need to be considered if I am to proceed with this.
Quote:
I know of some RAID cards that can perform repairs/rebuilds, expansions, and migrations while the system is online (they calculate the missing data from the other disks if you have parity). You said you have a fake RAID configuration? This would probably be difficult, but then again I'm not an expert.
Mine is a "fake" RAID configuration, assuming by fake you mean software RAID, so I don't have the online expansion/migration features those hardware RAID cards offer. Oh well.
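For what it's worth, Linux md software RAID can repair and rebuild a degraded mirror while the array stays online; what it cannot do here is reshape an in-use RAID1 into RAID10 in place. Array health and any resync in progress can be inspected with the standard read-only interfaces (device names as in this thread):

```shell
# Read-only status checks for md software RAID:
cat /proc/mdstat              # one status block per md device, resync progress
mdadm --detail /dev/md1       # per-member state: active, degraded, recovering

# Re-adding a replaced disk to a degraded mirror resyncs while mounted, e.g.:
# mdadm /dev/md1 --add /dev/sdb2
```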