Old 05-29-2007, 11:18 AM   #1
jostmart
LQ Newbie
 
Registered: Jul 2006
Posts: 8

Rep: Reputation: Disabled
Convert RAID 1 system into RAID 1+0


Hi

I am wondering how I can convert my RAID 1 system into RAID 1+0 (RAID 10).
The system has data on it that should NOT be destroyed!

I am using software RAID with mdadm now.
 
Old 05-29-2007, 11:53 AM   #2
macemoneta
Senior Member
 
Registered: Jan 2005
Location: Manalapan, NJ
Distribution: Fedora x86 and x86_64, Debian PPC and ARM, Android
Posts: 4,593
Blog Entries: 2

Rep: Reputation: 344
A minimal RAID-1 is typically two drives + spares. A minimal RAID-10 is typically four drives + spares. What do you have to work with, and what drives will you be using to build the new array?
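If it helps, post the output of the following so we can see the current layout (assuming your existing mirror is /dev/md0; adjust the name if it differs):

cat /proc/mdstat            # current arrays and their member drives
mdadm --detail /dev/md0     # details of the existing mirror
fdisk -l                    # all attached disks and their partitions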
 
Old 05-29-2007, 12:37 PM   #3
jostmart
LQ Newbie
 
Registered: Jul 2006
Posts: 8

Original Poster
Rep: Reputation: Disabled
what I have

I have 2 SATA-II disks that are used in the RAID-1 and 2 unused disks (same model). The chassis doesn't have room for more than 4 disks, so that's the limit.
 
Old 05-29-2007, 05:00 PM   #4
macemoneta
Senior Member
 
Registered: Jan 2005
Location: Manalapan, NJ
Distribution: Fedora x86 and x86_64, Debian PPC and ARM, Android
Posts: 4,593
Blog Entries: 2

Rep: Reputation: 344
OK, I'll assume your existing RAID-1 is /dev/md0, composed of /dev/sda and /dev/sdb. The unused drives I'll call /dev/sdc and /dev/sdd. The process is:

- Create a new RAID-1, /dev/md1, with the two unused disks.

mknod /dev/md1 b 9 1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sd[cd]
mke2fs -j /dev/md1
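If you want to be sure the new mirror has finished its initial sync before continuing, watch it with something like:

watch -n 5 cat /proc/mdstat    # wait until md1 shows [UU]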

- Split the original RAID-1, /dev/md0, by failing and removing one of the drives.

mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb

- Create the RAID-10 (RAID-0 of the RAID-1 arrays) /dev/md2:

mknod /dev/md2 b 9 2
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md[01]

- Initialize the RAID-10 array (your data is safe on the split drive, /dev/sdb):

mke2fs -j /dev/md2

- Mount the new RAID-10:

mkdir mount1
mount -t ext3 /dev/md2 mount1/

- Mount the drive split off from the original RAID-1 (/dev/sdb). A mirror member is directly mountable because the default 0.90 md superblock sits at the end of the device, leaving the filesystem intact at the start:

mkdir mount2
mount -t ext3 /dev/sdb mount2/

- Copy the data from the split drive (/dev/sdb) to the RAID-10:

rsync -avHxl --progress --inplace --exclude 'lost+found' mount2/ mount1/
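Before dismantling anything further, it's worth verifying the copy; a checksum-based dry run (-n makes no changes) should report essentially nothing left to transfer:

rsync -avHxn --checksum --exclude 'lost+found' mount2/ mount1/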

- Re-add the drive that was split from the original RAID-1 (/dev/sdb):

umount mount1
umount mount2
mdadm /dev/md0 --re-add /dev/sdb

- Done!

At each step you have your data safe. Don't proceed to the next step unless you have confirmed data integrity from the previous step.
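Once the re-add has finished syncing, a final sanity check might look like:

cat /proc/mdstat            # md0 and md1 both back to [UU], md2 active
mdadm --detail /dev/md2     # confirms the RAID-0 spans md0 and md1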
 
1 member found this post helpful.
Old 06-01-2008, 03:35 AM   #5
the_answer_is_no
LQ Newbie
 
Registered: May 2008
Posts: 5
Blog Entries: 1

Rep: Reputation: 1
Hi there! That's a very concise recipe you've supplied, macemoneta. Thanks!

However, I have a situation in which I don't think I can use your approach; or perhaps it can be used, but because I don't properly understand what's going on, I'm left wondering how I'd apply it to my case.

What if the existing RAID-1 device, composed of /dev/sda2 and /dev/sdb2, is where /root, /usr, /var, /proc, /swap, /lib... and all the other system files reside? How can one create the RAID-10 device (which would be RAID-0 of the RAID-1 arrays) without killing the system in the process?

I understand that you must first split the original RAID-1, /dev/md0, by failing and removing one of the drives, but wouldn't you still bring your system down in the process of striping /dev/md0 and /dev/md1? The striping destroys your data, and even though you have a healthy copy of all the files and data, you wouldn't then be able to boot into it to resume the operation.
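For context, this is what tells me the array really is hosting the root filesystem (device names are from my own box, so take them as an example):
Code:
df -h /             # shows which md device backs the root filesystem
cat /proc/mdstat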

Again, I probably have not understood what you're suggesting, so I'd really appreciate you shedding some more light on the topic.

Thanks in advance for your elucidations.
 
1 member found this post helpful.
Old 05-17-2009, 09:52 PM   #6
skipdashu
LQ Newbie
 
Registered: Apr 2009
Location: República de Tejas, Centro
Distribution: Ubuntu, Xubuntu, Dotsch/UX
Posts: 19

Rep: Reputation: 0
4 equal drives but slight diff in size

NEVER MIND... it's always more obvious when you re-read AFTER saving.... the one mirror is sdb & sdc... the other is sde1 & sdd1!

DELETE ME!

Last edited by skipdashu; 06-19-2009 at 07:55 PM. Reason: no longer a question
 
Old 05-19-2009, 01:37 AM   #7
skipdashu
LQ Newbie
 
Registered: Apr 2009
Location: República de Tejas, Centro
Distribution: Ubuntu, Xubuntu, Dotsch/UX
Posts: 19

Rep: Reputation: 0
can't format /dev/md2

Code:
sudo mkfs.ext3 /dev/md2
mke2fs 1.41.3 (12-Oct-2008)
mkfs.ext3: Device size reported to be zero.  Invalid partition specified, or partition table wasn't reread after running fdisk, due to a modified partition being busy and in use.  You may need to reboot to re-read your partition table.
I thought I had the 2nd (new) mirror set up: it started, "sync'd", and has just been sitting spinning, unmounted, for a couple of days until I got back to this tonight.

Code:
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md1 : active raid1 sdd[0] sde[1]
      39062400 blocks [2/2] [UU]
      
md0 : active raid1 sdb[0]
      39062400 blocks [2/1] [U_]
      
unused devices: <none>
I have /dev/sdc 'failed' and 'removed' to build the stripe.
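(That is, per the recipe earlier in the thread, I did the equivalent of:)
Code:
sudo mdadm /dev/md0 --fail /dev/sdc --remove /dev/sdc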

sda is the Ubuntu v8.10(Dotsch/UX) OS drive and is not involved.

md0 is normally mounted and has been running w/o problems for weeks. md0 is normally comprised of /dev/sdb and /dev/sdc.

I have md0 and md1 in /etc/mdadm/mdadm.conf:
Code:
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE /dev/sdb /dev/sdc /dev/sdd /dev/sde

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR skip@

# definitions of existing MD arrays

ARRAY /dev/md0 level=raid1 num-devices=2 metadata=00.90 UUID=252431f7:7393664c:180fba5d:ff5e4642
   devices=/dev/sdb,/dev/sdc

ARRAY /dev/md1 level=raid1 num-devices=2 metadata=00.90 UUID=54693081:6615cf45:180fba5d:ff5e4642
   devices=/dev/sdd,/dev/sde
The fdisk -l output differences between the two mirrors bother me greatly, but I don't know enough about what they're telling me to know what to do.
Code:
Disk /dev/sdb: 40.0 GB, 40000000000 bytes
255 heads, 63 sectors/track, 4863 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x206e1c51

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 40.0 GB, 40000000000 bytes
255 heads, 63 sectors/track, 4863 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x206e1c51

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 40.0 GB, 40000000000 bytes
255 heads, 63 sectors/track, 4863 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00020ff9

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1        4863    39062016   fd  Linux raid autodetect

Disk /dev/sde: 40.0 GB, 40000000000 bytes
255 heads, 63 sectors/track, 4863 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00020ff9

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1        4863    39062016   fd  Linux raid autodetect
Why do the md0 drives say "doesn't contain a valid partition table" while the md1 drives list a single partition (sd?1) flagged as Linux raid autodetect? I suspect I prepared the individual drives differently when I built md0 than when I built the md1 drives. Remember, the 1st time around I'd built md1 from devices sdd1 and sde1 instead of sdd and sde.
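(If I'm reading the man page right, one way to compare how each member was set up is to examine the md superblock on the devices directly:)
Code:
sudo mdadm --examine /dev/sdb     # whole-disk member: superblock on the raw device
sudo mdadm --examine /dev/sdd1    # partitioned member: superblock on the partition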

Note that an mdadm detail scan doesn't see a /dev/md2, but I have done these:
Code:
sudo mknod /dev/md2 b 9 2
sudo mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1
Code:
sudo mdadm -Ds
mdadm: metadata format 00.90 unknown, ignored.
mdadm: metadata format 00.90 unknown, ignored.
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=00.90 UUID=252431f7:7393664c:180fba5d:ff5e4642
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=00.90 UUID=54693081:6615cf45:180fba5d:ff5e4642
But this perhaps shows why:
Code:
sudo mdadm -vv --examine --scan /dev/md2
mdadm: metadata format 00.90 unknown, ignored.
mdadm: metadata format 00.90 unknown, ignored.
mdadm: No md superblock detected on /dev/md2.
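Side note: I believe the "metadata format 00.90 unknown, ignored" warnings come from the metadata=00.90 entries in my mdadm.conf above; newer mdadm wants 0.90, so something like this should silence them (untested):
Code:
sudo sed -i 's/metadata=00.90/metadata=0.90/' /etc/mdadm/mdadm.conf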
If nothing else, can someone tell me how to make both md2 and md1 go completely away so I can start over with formatting sdd and sde?
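From what I can make of the mdadm man page, the teardown would be something like the following (unverified, and assuming nothing on them is mounted); --zero-superblock is what makes a member disk forget it was ever in an array:
Code:
sudo mdadm -S /dev/md2                            # stop the stripe, if it exists
sudo mdadm -S /dev/md1                            # then stop the mirror
sudo mdadm --zero-superblock /dev/sdd /dev/sde    # wipe the member superblocks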

PS: Actually I'm tempted to try and tar off md0 and really start from ground 0 with all 4 drives just to make sure I get it down and get them all exactly the same... but that may be overkill.

Last edited by skipdashu; 06-19-2009 at 08:02 PM.
 
Old 05-20-2009, 04:59 PM   #8
skipdashu
LQ Newbie
 
Registered: Apr 2009
Location: República de Tejas, Centro
Distribution: Ubuntu, Xubuntu, Dotsch/UX
Posts: 19

Rep: Reputation: 0
On the off chance....

That somebody someday reads this thread and runs into some of the same questions... here's what I found out so far (and you'll notice any commands shown have been ubuntuized):

Quote:
Originally Posted by skipdashu View Post
sda is the Ubuntu v8.10(Dotsch/UX) OS drive and is not involved.

md0 is normally mounted and has been running w/o problems for weeks. md0 is normally comprised of /dev/sdb and /dev/sdc.

The fdisk -l output differences between the two mirrors bother me greatly, but I don't know enough about what they're telling me to know what to do.
Code:
(fdisk -l output as quoted in post #7 above)
Why do the md0 drives say "doesn't contain a valid partition table" while the md1 drives list a single partition (sd?1) flagged as Linux raid autodetect? I suspect I prepared the individual drives differently when I built md0 than when I built the md1 drives. Remember, the 1st time around I'd built md1 from devices sdd1 and sde1 instead of sdd and sde.
The difference was in how the drives were prepared. The md0 devices (sdb and sdc) were NOT individually partitioned; md0 was created from two drives that had all partitions deleted. The mdadm create was done and THEN the array was formatted with mkfs.ext3 /dev/md0, similar to what is described earlier in this thread.
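Side by side, the two preparation styles look like this (device names as used in this thread; obviously don't run these against disks holding data):
Code:
# Whole-disk members: no partition table, so fdisk -l has nothing to list
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
sudo mkfs.ext3 /dev/md0

# Partitioned members: one type-fd partition per disk, which fdisk -l does list
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
sudo mkfs.ext3 /dev/md1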

The md1 devices were each partitioned with fdisk prior to building the array, as detailed here:

Quote:
$ fdisk /dev/hdX

Type ‘n’ to create a new partition. Write down the cylinders you used, because the partition on the other disk has to fill exactly the same amount of space (not required for a full-drive mirror). Set its type to ‘fd’ by typing ‘t’. Finally, save the changes by typing ‘w’.

...remember once again that you have to follow this procedure twice, once for each hard drive.
It would seem to me that partitioning first is the way to go if you are mirroring partitions instead of whole drives. It looks like I'll be doing this on another machine shortly, in order to create one mirrored partition to be mounted as a generic 'data' drive and another to be mounted as BackupPC's pool/archive storage, both on a mirror of two 500GB drives.
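My rough plan for that build (untested, and the device names are hypothetical) is:
Code:
sudo fdisk /dev/sdX    # n = new partition, t = type fd, w = write; repeat on /dev/sdY
sudo mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdX1 /dev/sdY1    # 'data' mirror
sudo mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sdX2 /dev/sdY2    # BackupPC pool
sudo mkfs.ext3 /dev/md3
sudo mkfs.ext3 /dev/md4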

Quote:
Originally Posted by skipdashu View Post
If nothing else, can someone tell me how to make both md2 and md1 go completely away so I can start over with formatting sdd and sde?
While I haven't tried to build md2 yet (waiting for something to finish), I did rebuild md1. First I made sure md1 wasn't mounted. Then you can simply stop the array with:
Code:
sudo mdadm -S /dev/md1
It did complain about md1 being part of another array (md2), but it lets you go ahead anyway.

Since I had the GUI up, I went ahead and used GParted (System -> Partition Editor in Ubuntu) and deleted sdd1 and sde1, leaving essentially bare-metal drives.

OK, now with sdd and sde void of any partitions, I redid the mdadm create as described above (I didn't redo the mknod). The array md1 started and began sync'ing. While it was still sync'ing, I decided to test the suggestion that you can format during that process, so I went ahead and did:
Code:
sudo mkfs.ext3 /dev/md1
Both finished fine as evidenced by:
Code:
sudo head /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md1 : active raid1 sde[1] sdd[0]
      39062400 blocks [2/2] [UU]
      
md0 : active raid1 sdb[0] sdc[1]
      39062400 blocks [2/2] [UU]
      
unused devices: <none>
And just to prove that it was the preparation method that caused the previous differences in the fdisk -l output, I'll redo that (cutting out the sda stuff, since it's the non-RAID boot/OS drive):
Code:
sudo fdisk -l

Disk /dev/sdb: 40.0 GB, 40000000000 bytes
255 heads, 63 sectors/track, 4863 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x206e1c51

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 40.0 GB, 40000000000 bytes
255 heads, 63 sectors/track, 4863 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x206e1c51

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 40.0 GB, 40000000000 bytes
255 heads, 63 sectors/track, 4863 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 40.0 GB, 40000000000 bytes
255 heads, 63 sectors/track, 4863 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/sde doesn't contain a valid partition table
Ahhh, the symmetry is a thing of beauty to us AR types ;-)

I'll add anything of possible value here once I get md2 built.

Last edited by skipdashu; 06-19-2009 at 08:01 PM.
 
Old 11-06-2011, 11:42 PM   #9
oziemike
LQ Newbie
 
Registered: Jun 2004
Distribution: Mandrake
Posts: 4

Rep: Reputation: 0
Similar situation

Hi guys

My situation is fairly similar, except I already have md0 & md1 across the first two existing drives. They consist of sdb1 (swap) and sdb2 (/), with the same layout on sdc1 and sdc2. This is currently RAID 1.

To follow the guide published earlier, can anyone tell me how to apply this procedure to drives 3 & 4 to make the new RAID 10 configuration?
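My guess at a starting point would be to clone the existing partition layout onto the two new drives (assuming they show up as sdd and sde), something like the following, but I'm not sure where to go from there:
Code:
sudo sfdisk -d /dev/sdb | sudo sfdisk /dev/sdd    # copy sdb's partition table to sdd
sudo sfdisk -d /dev/sdb | sudo sfdisk /dev/sde    # and likewise to sde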

Would appreciate any help.

Mike
 
  

