LinuxQuestions.org


the_answer_is_no 05-31-2008 11:26 PM

Converting existing RAID1 (where /root, /swap, /usr, and /var reside) to RAID10
 
Hiya! I'm running a Hardy Heron (X)Ubuntu system:

Code:

sp@barbaro:~$ uname -a
Linux barbaro 2.6.24-17-generic #1 SMP Thu May 1 13:57:17 UTC 2008 x86_64 GNU/Linux

sp@barbaro:~$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=8.04
DISTRIB_CODENAME=hardy
DISTRIB_DESCRIPTION="Ubuntu 8.04"

It has four 160 GB SATA hard drives, which are presently set up in (software) RAID1 as follows:

Code:

sp@barbaro:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda2[0] sdb2[1]
      156055296 blocks [2/2] [UU]
     
md0 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
      192640 blocks [4/4] [UUUU]
     
unused devices: <none>

Note that md0 is a 4-way mirror of the devices /dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1, while md1 is presently just a simple RAID1 mirror of /dev/sda2 and /dev/sdb2. The remaining partitions, /dev/sdc2 and /dev/sdd2, are not involved in the configuration at all; I left them out when I originally set the system up because I couldn't figure out how to get them into RAID10 (or RAID1+0, if you like) with /dev/sda2 and /dev/sdb2.

My plan now is to finally complete the set-up and create a RAID10 device, md10, from the existing mirror md1 and a new mirror md2, the latter made up of /dev/sdc2 and /dev/sdd2. Please note that the root and system partitions are already mounted and active on RAID, so any change that temporarily deactivates the RAID device containing them will take the system down.
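If I've read mdadm(8) correctly, creating the new mirror on its own should be the easy, harmless half of the job, something like this (untested, so please treat the exact syntax as a sketch):

Code:

sudo mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc2 /dev/sdd2
cat /proc/mdstat    # watch the initial sync

It's joining that new mirror with the live md1 into a stripe that I can't see my way through.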

Here is some more information.

The RAID device md0 contains /boot:

Code:

sp@barbaro:~$ sudo fdisk -l /dev/md0

Disk /dev/md0: 197 MB, 197263360 bytes
2 heads, 4 sectors/track, 48160 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table


... and md1 contains the rest of the filesystems, including root, swap, etc.:

Code:

sp@barbaro:~$ sudo fdisk -l /dev/md1

Disk /dev/md1: 159.8 GB, 159800623104 bytes
2 heads, 4 sectors/track, 39013824 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md1 doesn't contain a valid partition table

Everything on md1 lives inside LVM2, as follows:

Code:

sp@barbaro:~$ sudo pvdisplay
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               vol_grp
  PV Size               148.83 GB / not usable 1.75 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              38099
  Free PE               0
  Allocated PE          38099
  PV UUID               8394Rm-AU72-6RZ7-5RQG-NWdR-R74D-wuREIu

Code:

sp@barbaro:~$ sudo lvscan
  ACTIVE            '/dev/vol_grp/swap' [2.00 GB] inherit
  ACTIVE            '/dev/vol_grp/root' [3.50 GB] inherit
  ACTIVE            '/dev/vol_grp/home' [10.00 GB] inherit
  ACTIVE            '/dev/vol_grp/var-log' [2.00 GB] inherit
  ACTIVE            '/dev/vol_grp/tmp' [5.00 GB] inherit
  ACTIVE            '/dev/vol_grp/usr' [20.00 GB] inherit
  ACTIVE            '/dev/vol_grp/var' [20.00 GB] inherit
  ACTIVE            '/dev/vol_grp/usr-local' [5.00 GB] inherit
  ACTIVE            '/dev/vol_grp/store' [81.32 GB] inherit


... such that the current situation looks like this:

Code:

sp@barbaro:~$ sudo df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vol_grp-root
                      3.5G  331M  3.2G  10% /
varrun                2.0G  116K  2.0G   1% /var/run
varlock               2.0G  4.0K  2.0G   1% /var/lock
udev                  2.0G  124K  2.0G   1% /dev
devshm                2.0G     0  2.0G   0% /dev/shm
lrm                   2.0G   43M  1.9G   3% /lib/modules/2.6.24-17-generic/volatile
/dev/mapper/vol_grp-home
                       10G   86M   10G   1% /home
/dev/mapper/vol_grp-tmp
                      5.0G   33M  5.0G   1% /tmp
/dev/mapper/vol_grp-usr
                       20G  1.3G   19G   7% /usr
/dev/mapper/vol_grp-usr--local
                      5.0G   33M  5.0G   1% /usr/local
/dev/mapper/vol_grp-var
                       20G  354M   20G   2% /var
/dev/mapper/vol_grp-var--log
                      2.0G   40M  2.0G   2% /var/log
/dev/mapper/vol_grp-store
                       82G  7.4G   74G  10% /store

Code:

sp@barbaro:~$ sudo fdisk -l

Disk /dev/sda: 160.0 GB, 160000000000 bytes
255 heads, 63 sectors/track, 19452 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000ccf7b

  Device Boot      Start        End      Blocks  Id  System
/dev/sda1  *          1          24      192748+  fd  Linux raid autodetect
/dev/sda2              25      19452  156055410  fd  Linux raid autodetect

Disk /dev/sdb: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000d1b00

  Device Boot      Start        End      Blocks  Id  System
/dev/sdb1  *          1          24      192748+  fd  Linux raid autodetect
/dev/sdb2              25      19457  156095572+  fd  Linux raid autodetect

Disk /dev/sdc: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0002dc62

  Device Boot      Start        End      Blocks  Id  System
/dev/sdc1  *          1          24      192748+  fd  Linux raid autodetect
/dev/sdc2              25      19457  156095572+  fd  Linux raid autodetect

Disk /dev/sdd: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x1c9c2d38

  Device Boot      Start        End      Blocks  Id  System
/dev/sdd1  *          1          24      192748+  fd  Linux raid autodetect
/dev/sdd2              25      19457  156095572+  fd  Linux raid autodetect

Disk /dev/md0: 197 MB, 197263360 bytes
2 heads, 4 sectors/track, 48160 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table

Disk /dev/md1: 159.8 GB, 159800623104 bytes
2 heads, 4 sectors/track, 39013824 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md1 doesn't contain a valid partition table

I'm using ReiserFS across the board, on every mount from /boot to /store.

Whew!

Just to reiterate: what I'd like to do is bring the remaining, unused partitions /dev/sdc2 and /dev/sdd2 into play, mirror them as a new RAID1 device (/dev/md2), and then stripe /dev/md1 and /dev/md2 together to achieve a RAID10 made up of the currently existing (and presently active, in particular with respect to root and the rest of the system) RAID1 device and the new one.
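Spelled out, the naive finishing step would presumably be the single command below, and that is exactly what I can't run: as I understand it, creating the stripe would re-initialize /dev/md1 as a raw member and take my running system with it.

Code:

# DON'T run this against a live md1; shown only to make the goal concrete
sudo mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md1 /dev/md2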

Thanks in advance; I appreciate your advice.

student04 06-01-2008 02:47 AM

Quote:

Originally Posted by the_answer_is_no (Post 3170795)
Thanks in advance; I appreciate your advice.


Click here to email me off-list

Just FYI, it's better for the community if communication stays on the forum rather than going off-list, as other readers may benefit from any solutions provided.

Also, if a problem is proving difficult, others who want to offer ideas can read what has already been tried instead of starting from scratch.

And welcome to LQ.org :)

-AM

the_answer_is_no 06-01-2008 03:03 AM

Quote:

Just FYI, it's better for the community if communication stays on the forum rather than going off-list, as other readers may benefit from any solutions provided.
Understood. I only meant that anyone who, for whatever reason, wanted to contact me off-list could do so via the link provided; I didn't mean that replies to this query should be sent to me off-list. The link was intended as an alternative, not the primary channel. However, having re-read my posting, I agree that it does read as though I was instructing folks to reply off-list, and that was not my intention at all. Thanks for pointing this out to me.

Quote:

And welcome to LQ.org
Thanks Alex.

the_answer_is_no 06-01-2008 04:09 AM

Alex, I removed the off-list link. :)

Just an update, perhaps to clarify the situation.

I want to create a RAID10 device from two RAID1 devices. The first is the existing mirror:
  • /dev/md1 - which comprises the partitions:
    • /dev/sda2 -- on HDD1
    • /dev/sdb2 -- on HDD2
... and the second is a new RAID1 device I have yet to create:
  • /dev/md2 - which will comprise the partitions:
    • /dev/sdc2 -- on HDD3
    • /dev/sdd2 -- on HDD4

The RAID10 device will, of necessity, be a stripe of /dev/md1 and /dev/md2.

This would normally be easy to do, except that in my case a lack of foresight has left the entire root file system on /dev/md1, one of the two RAID1 devices that I need to stripe. Because:
Quote:

An existing file system cannot be converted directly to a stripe. To place an existing file system on a stripe, you must back up the file system, create the stripe, then restore the file system to the stripe.
(from: http://docs.sun.com/app/docs/doc/806...f2ve3ga?a=view)

... I'm in a bit of a bind, because the particular filesystem I wish to stripe happens to hold root and all the other system stuff. That means I can't simply go ahead and stripe the two RAID1 devices without trashing my system in the process. Backing up the filesystem I wish to stripe is easy (in fact I already have); I just can't see how to carry out the striping and keep my system up at the same time.
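For concreteness, here is my (entirely untested) understanding of what that by-the-book route would look like, booted from a live/rescue CD with nothing in vol_grp mounted. The command sequence is hypothetical and comes from my reading of the man pages:

Code:

vgchange -an vol_grp                    # deactivate the LVM volumes (system is down at this point)
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc2 /dev/sdd2
mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md1 /dev/md2   # wipes md1's old contents
pvcreate /dev/md10                      # rebuild LVM from scratch on top of the stripe
vgcreate vol_grp /dev/md10
# ...then recreate each LV with lvcreate, mkreiserfs each one, restore the backups,
# and update /etc/mdadm/mdadm.conf and the initramfs so the new arrays assemble at boot

What I can't see is how to do any of that without the machine being down for the duration.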

Is there a way to keep the existing system while creating the RAID10 device as described?

Thanks again.

student04 06-01-2008 02:03 PM

I am not an expert with RAID configurations, but I do know a few things.

First, the safest way to migrate data is to back up (which you have already done). As the quote you posted says: save the original data, create the RAID volume fresh, and restore. As for keeping your system up while doing this: what you could do is have a second system stand in as a temporary replacement. If that is not possible (e.g., cost, time), then all I can think of is to notify whoever needs the system of the downtime and perform the migration through the night, when the system load is at its lowest.
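I don't know what backup tooling you use, but conceptually I mean something along these lines (hypothetical paths; adjust to your setup):

Code:

# From single-user mode, dump each filesystem somewhere off these four disks:
sudo tar -C / -cpzf /mnt/external/rootfs.tar.gz --one-file-system .
sudo tar -C /home -cpzf /mnt/external/home.tar.gz --one-file-system .
# ...and so on for /usr, /var, /store, etc. Verify the archives before touching the arrays.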

I know of some RAID cards that can do RAID repairs/rebuilds, expansions and migrations while the system is online (they perform calculations to regenerate the missing data from the other disks, if you have parity). You said you have a fake RAID configuration? This would probably be difficult, but then again I'm not an expert.

-AM

the_answer_is_no 06-02-2008 10:17 AM

Thanks again Alex.

Yes, the things you suggest do need to be considered if I am to proceed with this.

Quote:

I know of some RAID cards that can do RAID repairs/rebuilds, expansions and migrations while the system is online (it performs calculations to generate the missing data from the other disks if you have parity). You said you have a fake RAID configuration? This would probably be difficult, but then again I'm not an expert.
Mine is a "fake" RAID configuration, assuming by fake you mean software RAID, so I don't have the repair/rebuild option available to me that hardware RAID may offer. Oh well.

