Old 04-12-2012, 01:56 PM   #1
haerta
LQ Newbie
 
Registered: Apr 2012
Posts: 4

Rep: Reputation: 0
How to migrate from raid 6 to raid 5 using mdadm and LVM


Hello!

I have read through different forums and blogs, but I haven't found a detailed enough explanation of how to do it.

I currently have a RAID 6 array with 4 hard drives:

Code:
root@Gnoccho:~# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid6 sdc1[2] sda1[0] sdd1[3] sdb1[1]
      234435584 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
I would like to get rid of one hard drive and migrate to RAID 5 with three hard drives (and one spare).

Can somebody tell me what commands I have to use? In case it is important: I have LVM configured on the whole array, with one volume group and two logical volumes (root and swap).

Thank you very much,

haerta

Last edited by haerta; 04-12-2012 at 02:13 PM.
 
Old 04-13-2012, 04:14 PM   #2
smallpond
Senior Member
 
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: Fedora
Posts: 4,123

Rep: Reputation: 1260
I've never tried anything as drastic as what you're doing, but if you want to give it a try, it should work.

You want to run mdadm --grow with the new RAID parameters, just as if you were doing a create. md is smart enough to keep track of both the old and the new parameters. It reshapes one stripe at a time from the old layout to the new one and records how far it has gotten, so that reads and writes to the already-reshaped part use the new shape and the rest still use the old shape. It will take a long time, and it will not be good if it gets interrupted, since you could lose part of the stripe it is currently working on. So the commands would be something like:
Code:
cat /proc/mdstat
Make sure that it isn't in the middle of a rebuild. It should show a happy raid6 named /dev/md0 with 4 drives - let's call them sda1, sdb1, sdc1, sdd1 (YMMV)
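If you want a second opinion beyond /proc/mdstat, mdadm can also print the array state directly; this is read-only and safe to run at any time:
Code:
# read-only check: State should be "clean" and all four members "active sync"
mdadm --detail /dev/md0
If that looks healthy, the actual conversion is: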
Code:
mdadm --grow /dev/md0 --level=raid5 --raid-devices=3
mdadm may want to do this in two steps (like I said, I've never tried this), in which case you would first need to remove a drive:
Code:
mdadm --manage --remove /dev/md0 /dev/sdd1

Usual caveats: backup first, IANACS, following my advice may open the velociraptor cage, etc.
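To act on the "backup first" part it is also worth saving the current array layout somewhere off the array before you start; a rough sketch (the /media/usb target is just an example, and the mdadm.conf location varies by distro):
Code:
# example only: record the current RAID layout and config on another disk
mdadm --detail /dev/md0 > /media/usb/md0-detail-before.txt
mdadm --examine /dev/sd[abcd]1 >> /media/usb/md0-detail-before.txt
cp /etc/mdadm/mdadm.conf /media/usb/mdadm.conf.bak   # or /etc/mdadm.conf on some distros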

Good luck! Feel free to yell and scream if it doesn't work and I will refund your fee for my advice.
 
1 member found this post helpful.
Old 04-16-2012, 02:01 PM   #3
haerta
LQ Newbie
 
Registered: Apr 2012
Posts: 4

Original Poster
Rep: Reputation: 0
Thanks

Thanks for your answer!

No need to yell or scream - I think you gave the right hints.

I was unable to remove sdd1 because it was busy:
Code:
mdadm --manage --remove /dev/md0 /dev/sdd1
mdadm: hot remove failed for /dev/sdd1: Device or resource busy
So I tried to grow without removing the fourth drive.

Code:
mdadm --grow /dev/md0 --level=raid5 --raid-devices=3
mdadm: /dev/md0: Cannot grow - need backup-file
OK, since I had an external hard drive available, I tried the following:

Code:
mdadm --grow /dev/md0 --level=raid5 --raid-devices=3 --backup-file=/media/120GB\ extern/mdadm-backupfile
At least my computer is doing something:

Code:
cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid6 sdc1[2] sdd1[3] sda1[0] sdb1[1]
      234435584 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      [>....................]  reshape =  1.6% (1908736/117217792) finish=767.6min speed=2503K/sec
      
unused devices: <none>
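While the reshape runs it can be watched, and if it keeps crawling along at the ~2.5MB/s shown above, raising the md speed limits sometimes helps (the values below are only examples of the standard kernel tunables, not mdadm options):
Code:
# re-read the progress line every 60 seconds
watch -n 60 cat /proc/mdstat
# optionally let md spend more bandwidth on the reshape (KB/s)
echo 50000  > /proc/sys/dev/raid/speed_limit_min
echo 200000 > /proc/sys/dev/raid/speed_limit_max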
I will post again if it does not work. Maybe then it's time to ask for a refund ;-)
 
Old 04-19-2012, 05:26 PM   #4
haerta
LQ Newbie
 
Registered: Apr 2012
Posts: 4

Original Poster
Rep: Reputation: 0
All Done

I successfully migrated from RAID 6 to RAID 5 with one spare drive.

Then I grew my RAID 5 to get more free space (which was actually the reason I wanted to get rid of the RAID 6 in the first place):
Code:
mdadm --grow /dev/md0 --level=raid5 --raid-devices=4 --backup-file=/media/120GB\ extern/mdadm-backupfile
I ended up with a nice RAID 5 with 4 drives.
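An optional read-only cross-check of the end result:
Code:
# confirm the new level, device count and array size
mdadm --detail /dev/md0 | grep -E 'Raid Level|Raid Devices|Array Size|State'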

Thanks!
 
Old 04-20-2012, 05:17 PM   #5
haerta
LQ Newbie
 
Registered: Apr 2012
Posts: 4

Original Poster
Rep: Reputation: 0
There was still work to do. After growing my RAID 5 to use all four drives, I realized that only the volume group on top of the RAID had been resized to the whole of /dev/md0, but not the logical volume inside it:

Code:
 lvdisplay /dev/MyVolumeGroup/MyRootVolume 
  --- Logical volume ---
  LV Name                /dev/MyVolumeGroup/MyRootVolume
  VG Name                MyVolumeGroup
  LV UUID                ud63WI-Pwu4-97Rx-lSE5-mYdF-yPmP-mEDX0a
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                216,12 GiB
  Current LE             55327
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
MyRootVolume should occupy almost all of the space on the RAID (there is one LVM volume group spanning the whole RAID device). With 4 x 120GB hard drives I should have about 3 x 120GB available (one drive's worth goes to parity in RAID 5).
I only have one root volume and one swap volume (8GB).
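One note for anyone following along: the volume group can only hand out the array's new capacity once the physical volume on /dev/md0 has been resized. Here the volume group already shows the new size (see below), but if it did not, this would be the usual missing step (assuming md0 is the only PV in the group):
Code:
# only needed if vgdisplay does not show the extra free extents yet
pvresize /dev/md0
pvdisplay /dev/md0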

Let's verify that the volume group on /dev/md0 has free space available:

Code:
vgdisplay /dev/MyVolumeGroup
  --- Volume group ---
  VG Name               MyVolumeGroup
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               335,36 GiB
  PE Size               4,00 MiB
  Total PE              85853
  Alloc PE / Size       57235 / 223,57 GiB
  Free  PE / Size       28618 / 111,79 GiB
  VG UUID               PHmGGp-nKv1-ewUr-9TcX-j2Cu-YbH3-0DZp7P
So I needed to add those 111,79 GiB to the logical volume:

Code:
lvresize -L +111,78GB /dev/MyVolumeGroup/MyRootVolume
  Rounding up size to full physical extent 111,78 GiB
  Extending logical volume MyRootVolume to 327,90 GiB
  Logical volume MyRootVolume successfully resized
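A shorter equivalent, if the goal is simply to give the root LV every free extent left in the volume group (same end result as the exact-size command above):
Code:
# hand MyRootVolume all remaining free space in MyVolumeGroup
lvresize -l +100%FREE /dev/MyVolumeGroup/MyRootVolume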
Let's verify that the logical volume has grown:
Code:
lvdisplay /dev/MyVolumeGroup/MyRootVolume 
  --- Logical volume ---
  LV Name                /dev/MyVolumeGroup/MyRootVolume
  VG Name                MyVolumeGroup
  LV UUID                ud63WI-Pwu4-97Rx-lSE5-mYdF-yPmP-mEDX0a
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                327,90 GiB
  Current LE             83943
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     6144
  Block device           253:0
So MyRootVolume has grown from 216,12 to 327,90 GiB.

According to the guide referenced below, I have to make sure that the file system on the logical volume has grown too. Let's see if it has:

Code:
df -kh
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/MyVolumeGroup-MyRootVolume
                      217G  188G   22G  90% /
udev                  995M  8,0K  995M   1% /dev
tmpfs                 402M  1,1M  401M   1% /run
none                  5,0M     0  5,0M   0% /run/lock
none                 1005M  160K 1004M   1% /run/shm
/dev/mapper/MyVolumeGroup-MyRootVolume
                      217G  188G   22G  90% /home
So we see that MyRootVolume still shows the old size of 217GB. Only 22GB are left on /.

I can grow the btrfs filesystem on the root volume (mount point /) with:
Code:
btrfs filesystem resize max /
Resize '/' of 'max'
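As an optional cross-check from btrfs' own side (the df output below shows the same thing):
Code:
# list the devices and sizes btrfs sees for the filesystem mounted at /
btrfs filesystem show /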
Verify that we now have about 111GB more space for files:
Code:
df -kh
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/MyVolumeGroup-MyRootVolume
                      328G  188G  134G  59% /
udev                  995M  8,0K  995M   1% /dev
tmpfs                 402M  1,1M  401M   1% /run
none                  5,0M     0  5,0M   0% /run/lock
none                 1005M  160K 1004M   1% /run/shm
/dev/mapper/MyVolumeGroup-MyRootVolume
                      328G  188G  134G  59% /home

We are done!

A good guide to LVM resizing: http://www.tcpdump.com/kb/os/linux/l...de/expand.html

Last edited by haerta; 04-20-2012 at 07:45 PM.
 
Old 04-20-2012, 06:16 PM   #6
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,099

Rep: Reputation: 4117
Now there is an interesting concoction - btrfs on top of LVM on top of md RAID ...
People seem to go one way or the other, not both.

Glad you got it all working.
 
  

