Old 03-31-2011, 05:24 AM   #1
LQ Newbie
Registered: Jun 2008
Posts: 4

Rep: Reputation: 0
mdadm cannot grow raid1 over lvm

Hi all!
I've got 2 servers (xen1 and xen2 are their hostnames) with the rather perverse configuration below:
Each server has 4 SATA disks, 1 TB each,
16 GB DDR3,
and Debian Squeeze x64 installed:
root@xen2:~# uname -a
Linux xen2 2.6.32-5-xen-amd64 #1 SMP Wed Jan 12 05:46:49 UTC 2011 x86_64 GNU/Linux

Storage configuration:
The first 256 MB + 32 GB of 2 of the 4 disks are used as RAID1 devices for /boot and swap respectively.
The rest of the space, 970 GB on all 4 SATA disks, is used as RAID10.
LVM2 is installed on top of that RAID10. The volume group is named xenlvm (these servers are meant to be Xen 4.0.1 hosts, but this story is not about Xen troubles).
/ , /var and /home are located on logical volumes of small size (I just noticed I mixed up the LV names and partitions, but I don't think that's the problem):

root@xen2:~# df -h
Filesystem Size Used Avail Use% Mounted on
9.2G 6.0G 2.8G 69% /
tmpfs 7.6G 0 7.6G 0% /lib/init/rw
udev 7.1G 316K 7.1G 1% /dev
tmpfs 7.6G 0 7.6G 0% /dev/shm
/dev/md3 223M 31M 180M 15% /boot
9.2G 150M 8.6G 2% /home
9.2G 2.5G 6.3G 29% /var

About 900 GB of the "xenlvm" volume group is left free for creating new logical volumes, which are meant to be used as member devices of RAID1 arrays. One member of such an array is a local logical volume and the second is an ATA over Ethernet device (AoE is Coraid's invention for exporting SATA drives over raw Ethernet, bypassing TCP/IP to increase performance and throughput).
The name of this AoE device is e.g. e0.1.
We need such complications to run Xen VMs. These VMs will use the RAID1 devices for storing their data, so if one of the two hosts (xen1 or xen2) dies in a catastrophic failure, the other still holds the virtual machine's block device and we can start the VMs that ran on the dead host on the surviving one.
These two servers have 2 Ethernet devices each. One (eth1 on each) communicates with our LAN (to connect to the server). The other (eth0 on each) is connected to the other server by an Ethernet crossover cable at 1 Gbit/s to carry the ATA over Ethernet traffic.
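For reference, an LV like this is usually exported over AoE with the vblade/vbladed tools from the aoetools project; a minimal sketch is below (the shelf/slot numbers 0 and 1 are assumptions matching the e0.1 name, and the commands need root plus real hardware):

```shell
# On xen1: export the 20gigs LV as AoE target e0.1 (shelf 0, slot 1) over eth0.
# vbladed is the daemonizing wrapper around vblade.
vbladed 0 1 eth0 /dev/xenlvm/20gigs

# On xen2: probe for AoE targets and list what is visible.
aoe-discover
aoe-stat
```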

All that madness is running well.

Here is an example now:

"20gigs" is the name of a logical volume on each of the hosts (it's 20 gigabytes in size; Captain Obvious at your service).
On xen1 this volume is exported over ATA over Ethernet as e0.1. Here is how it looks when running aoe-stat on xen2:
root@xen2:~# aoe-stat
e0.1 21.474GB eth0 4096 up

Now let's try creating a RAID1 array on xen2 using the local 20gigs and the AoE device e0.1:
mdadm -C /dev/md5 --level=1 --raid-devices=2 /dev/xenlvm/20gigs /dev/etherd/e0.1
It runs normally.
The next step was to install Windows as the guest OS for the VM on that RAID array. The Windows VM runs fine with pvops drivers.

Now the crux of it:
While testing the performance of this configuration I wondered: what if I have to extend the disk space of that virtual machine? That's a very common scenario. So I tried to take the corresponding steps:

1. Extend the logical volume named "20gigs" on both the xen1 and xen2 hosts from 20 GB to 25 GB:

root@xen2:~# lvextend -L+5G /dev/xenlvm/20gigs
root@xen2:~# lvdisplay /dev/XENLVM/20gigs
--- Logical volume ---
LV Name /dev/XENLVM/20gigs
LV UUID rp3mvd-C2Ld-S0tv-SBGE-YkyH-h2hh-5pckbr
LV Write Access read/write
LV Status available
# open 0
LV Size 25.00 GiB
Current LE 6400
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 4096
Block device 253:7

A comment straight away: I had to kill -9 the AoE processes serving device e0.1 on xen1 and restart them, because an ATA over Ethernet device can't change its size on the fly.
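In other words, something like the sketch below (the pkill pattern and the aoe-revalidate step are assumptions; vblade of that era did not notice a size change in its backing device, so the export has to be restarted):

```shell
# On xen1: kill the vblade process exporting the now-resized LV...
pkill -9 -f '/dev/xenlvm/20gigs'
# ...and export it again so the new size is advertised.
vbladed 0 1 eth0 /dev/xenlvm/20gigs

# On xen2: ask the aoe driver to re-read the size of the target.
aoe-revalidate e0.1
```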

2. Next step: extend the size of the /dev/md5 RAID1 array (the block device for the VM):
Here is corresponding `cat /proc/mdstat` record:

md5 : active (auto-read-only) raid1 dm-7[0] etherd/e0.2[1]
20970496 blocks super 1.2 [2/2] [UU]

Extending size using mdadm:

root@xen2:~# mdadm --grow /dev/md5 --size=max
mdadm: component size of /dev/md5 has been set to 20970496K

And here we see: nothing changed. The device was 20 GB before the grow attempt and it has the same size after.
At first I thought it was an AoE device problem. But that idea was wrong: I created two logical volumes of 5 GB each on the xen2 host, then extended each of them by 2 GB.
I created a RAID1 array /dev/md6 from them and ran mkfs.ext2 on it. As a third step, I tried to extend it using the --grow option:

root@xen2:~# mdadm --grow /dev/md6 --size=max
mdadm: component size of /dev/md6 has been set to 5241856K
You see, the same problem.
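The reported number itself is consistent with the old superblock size being kept. For a 5 GiB component with metadata 1.2, mdadm of this era reserves roughly 1 MiB at the start of the device for the superblock and data offset (the exact offset is an assumption here), which matches the output exactly:

```shell
# 5 GiB component device, expressed in KiB
lv_kib=$((5 * 1024 * 1024))
# approximate space reserved by the metadata-1.2 superblock / data offset
meta_kib=1024
# usable component size: matches the 5241856K that mdadm printed
component_kib=$((lv_kib - meta_kib))
echo "$component_kib"
```

So the array still believes each component is 5 GiB; the extra 2 GiB in each LV is simply not recorded in the superblock, which is why `-z max` finds nothing to grow into.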

Tried some "mdadm grow" howtos after that.

No success. I tried to --fail one of the logical volumes (a member of that array) and extend the array while it had only one member disk left:

root@xen2:~# mdadm /dev/md6 --fail /dev/XENLVM/test2 --remove /dev/XENLVM/test2
mdadm: set /dev/XENLVM/test2 faulty in /dev/md6

and --grow:

root@xen2:~# mdadm -G /dev/md6 -z max
mdadm: component size of /dev/md6 has been set to 5241856K

The same result, as you can see.

Sorry for this long story and my "best English".

Does anybody have any idea what's going wrong? And what steps can I take to resolve this issue with mdadm?
Old 03-31-2011, 11:17 PM   #2
Registered: Nov 2007
Posts: 67

Rep: Reputation: 4
Check here, maybe this will solve your problem.
Old 04-01-2011, 02:34 AM   #3
LQ Newbie
Registered: Jun 2008
Posts: 4

Original Poster
Rep: Reputation: 0
Originally Posted by ashish_neekhra View Post
Check here, maybe this will solve your problem.
Thanks for the attention, but that's not my case =(
My case is that I have RAID1 on top of LVM on top of RAID10, and the RAID1 can't be resized.
Old 04-07-2011, 02:52 AM   #4
LQ Newbie
Registered: Jun 2008
Posts: 4

Original Poster
Rep: Reputation: 0
One good guy helped me on a Russian forum.

You need to reassemble the array before growing it.
Stop the array:
mdadm -S /dev/mdX

Reassemble the array, telling it to update the superblock information about the device sizes:
mdadm -A /dev/mdX -U devicesize /dev/volgroup/logvol[1-2]

Finally grow and be happy:
mdadm -G -z max /dev/mdX
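Applied to the md6 experiment above: after the stop / reassemble with -U devicesize / grow sequence, one would expect the component size to reflect the full 7 GiB LVs (5 GiB extended by 2 GiB), minus the same ~1 MiB metadata reservation assumed earlier:

```shell
# 7 GiB component (the 5 GiB LV after lvextend -L+2G), in KiB
lv_kib=$((7 * 1024 * 1024))
meta_kib=1024   # assumed metadata-1.2 reservation, as before
expected_kib=$((lv_kib - meta_kib))
echo "$expected_kib"
```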
Old 02-01-2012, 11:15 PM   #5
LQ Newbie
Registered: Feb 2011
Posts: 16

Rep: Reputation: 0
Similar question

Hello, I have a similar issue. Say, in a simple case, I have RAID 1 over two drives on a Dell R600, forming virtual disk 0. A volume group (VG00) occupies 60 percent of this RAID. Can I extend VG00 by creating a device (fdisk -c /dev/sdx), then pvcreate /dev/sdx, vgextend VG00 /dev/sdx, and finally lvextend/resize2fs a logical volume on VG00?

Is it as simple as that? Someone told me I have to "break the RAID", but I haven't found anything to verify this. Your thoughts on this? Thanks.
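For what it's worth, the usual sequence for this (no RAID breaking involved) looks like the sketch below; /dev/sdx and lv_data are placeholder names, and resize2fs can grow ext3/ext4 online:

```shell
# Label the new virtual disk (or partition) as an LVM physical volume.
pvcreate /dev/sdx
# Add the new PV to the existing volume group.
vgextend VG00 /dev/sdx
# Grow the logical volume by 50 GiB (adjust to taste).
lvextend -L +50G /dev/VG00/lv_data
# Grow the ext filesystem to fill the enlarged LV.
resize2fs /dev/VG00/lv_data
```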

