There was still work to do. After growing my RAID 5 to use all four drives, I realized that only the VolumeGroup on top of the RAID had been resized to cover the whole /dev/md0, but not the LogicalVolume inside it:
Code:
lvdisplay /dev/MyVolumeGroup/MyRootVolume
--- Logical volume ---
LV Name /dev/MyVolumeGroup/MyRootVolume
VG Name MyVolumeGroup
LV UUID ud63WI-Pwu4-97Rx-lSE5-mYdF-yPmP-mEDX0a
LV Write Access read/write
LV Status available
# open 1
LV Size 216,12 GiB
Current LE 55327
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
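For completeness: the free space only shows up in the VolumeGroup because the PhysicalVolume on /dev/md0 already covers the grown RAID. If it did not, the check and fix would look roughly like this (not a step I needed here; it assumes /dev/md0 is the only PhysicalVolume in MyVolumeGroup):
Code:
# Check that the PhysicalVolume spans the whole grown RAID device
pvdisplay /dev/md0
# If "PV Size" still shows the old RAID size, grow the PV to fill /dev/md0
pvresize /dev/md0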
MyRootVolume should occupy almost all of the space on the RAID (there is one LVM VolumeGroup spanning the whole RAID device). With 4 x 120 GB hard drives I should have about 3 x 120 GB available (one drive's worth of capacity goes to parity in RAID 5).
I only have one root volume and one swap volume (8 GB).
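As a rough sanity check, the RAID 5 math works out: 120 GB per drive in decimal is about 111,8 GiB, so three drives' worth of capacity matches the 335,36 GiB that vgdisplay reports below. If bc is installed, the conversion is a one-liner:
Code:
# RAID 5 usable capacity: (drives - 1) x drive size, converted from decimal GB to GiB
echo "scale=2; (4 - 1) * 120 * 10^9 / 2^30" | bc
335.27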
Let's verify that the VolumeGroup on /dev/md0 has free space available:
Code:
vgdisplay /dev/MyVolumeGroup
--- Volume group ---
VG Name MyVolumeGroup
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 335,36 GiB
PE Size 4,00 MiB
Total PE 85853
Alloc PE / Size 57235 / 223,57 GiB
Free PE / Size 28618 / 111,79 GiB
VG UUID PHmGGp-nKv1-ewUr-9TcX-j2Cu-YbH3-0DZp7P
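The same numbers are available in a more compact form, if you prefer a one-line summary (just an alternative, not a required step):
Code:
# Compact VolumeGroup overview; VFree is the unallocated space
vgs MyVolumeGroup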
So I needed to add those 111,79 GiB to the LogicalVolume:
Code:
lvresize -L +111,78GB /dev/MyVolumeGroup/MyRootVolume
Rounding up size to full physical extent 111,78 GiB
Extending logical volume MyRootVolume to 327,90 GiB
Logical volume MyRootVolume successfully resized
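Instead of typing the exact size, LVM can also be told to simply take every free extent that is left, which avoids doing the GiB arithmetic by hand (an alternative I did not use here):
Code:
# Grow the LogicalVolume by all remaining free extents in the VolumeGroup
lvextend -l +100%FREE /dev/MyVolumeGroup/MyRootVolume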
Let's verify that the LogicalVolume has grown:
Code:
lvdisplay /dev/MyVolumeGroup/MyRootVolume
--- Logical volume ---
LV Name /dev/MyVolumeGroup/MyRootVolume
VG Name MyVolumeGroup
LV UUID ud63WI-Pwu4-97Rx-lSE5-mYdF-yPmP-mEDX0a
LV Write Access read/write
LV Status available
# open 1
LV Size 327,90 GiB
Current LE 83943
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 6144
Block device 253:0
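A quicker way to double-check the new size, if the full lvdisplay output is more than you need:
Code:
# One-line overview of the LogicalVolumes in the VolumeGroup and their sizes
lvs MyVolumeGroup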
So MyRootVolume has grown from 216,12 to 327,90 GiB.
According to the guide referenced below, I have to make sure that the file system on the LogicalVolume has grown too. Let's see if it has:
Code:
df -kh
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/MyVolumeGroup-MyRootVolume
217G 188G 22G 90% /
udev 995M 8,0K 995M 1% /dev
tmpfs 402M 1,1M 401M 1% /run
none 5,0M 0 5,0M 0% /run/lock
none 1005M 160K 1004M 1% /run/shm
/dev/mapper/MyVolumeGroup-MyRootVolume
217G 188G 22G 90% /home
So we see that MyRootVolume still shows only 217 GB, reflecting the old size. Only 22 GB are left free on /.
I can resize the btrfs file system on the RootVolume (mount point /) with
Code:
btrfs filesystem resize max /
Resize '/' of 'max'
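btrfs can also confirm the new size from its own point of view (an optional extra check; the df output below is what I actually relied on):
Code:
# Show the device backing / and its size as btrfs sees it
btrfs filesystem show /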
Let's verify that we now have about 111 GB more space for files:
Code:
df -kh
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/MyVolumeGroup-MyRootVolume
328G 188G 134G 59% /
udev 995M 8,0K 995M 1% /dev
tmpfs 402M 1,1M 401M 1% /run
none 5,0M 0 5,0M 0% /run/lock
none 1005M 160K 1004M 1% /run/shm
/dev/mapper/MyVolumeGroup-MyRootVolume
328G 188G 134G 59% /home
We are done!
A good guide to LVM resizing:
http://www.tcpdump.com/kb/os/linux/l...de/expand.html