Adding disk space to root logical volume with pvresize and lvextend
Hi All
I've got an HP ML370 G5 server running Ubuntu Hardy with a RAID 1+0 setup and a root logical volume that's in need of some extra disk space. I'm pretty new to RAID, and so far I've added two new 146G disks using the HP SmartArray utility. fdisk is showing the disk size as 440.3GB (increased from ~293GB), and df is still showing the root logical volume at 260G (full outputs below, along with those of pvdisplay, vgdisplay and lvdisplay).

I now need to extend the root logical volume, and after reading various posts I'm still unsure. My current plan of action is:

1. Extend /dev/cciss/c0d0p2 using fdisk
2. Extend the physical volume with "pvresize /dev/cciss/c0d0p5"
3. Extend the logical volume with "lvextend --size +140G /dev/myserver/root"

Does this sound reasonable, or am I about to trash the server?

Thanks
Bort

Code:
$ sudo fdisk -l Code:
$ df -h Code:
$ sudo pvdisplay |
Logical Volume Manager
Greetingz!
From what I can tell, you're using a hardware RAID controller on the HP server, so the MD tools (the "mdadm" command) haven't been used to set up the physical volumes imported into your volume group. (The "mdadm" command is typically used to set up a software-based RAID 0/1/0+1/5 array.) So this seems to be an "LVM2-only" setup.

You're on the right track, but you need a fourth step:

4. Extend the filesystem with "resize2fs"

With that said, I'm sure it goes without saying that you should make sure you have a current working backup of the whole server, just in case things go completely haywire.

Word to the wise: if this is "LVM2" and not "Veritas Storage Foundation on Linux", then you're going to have to unmount the filesystem that you want to extend. (Check the output of the "mount" command to see if you have something other than ext2, ext3, proc, sys, usbfs, or tmpfs filesystems.)

There's a difference between extending the physical volume with "pvresize", extending the logical volume with "lvresize", and extending the filesystem with "resize2fs". You can modify physical volumes, volume groups (if needed, but not in this case), and logical volumes without unmounting things. However, resizing filesystems (with the exception of vxfs from Veritas) requires said filesystem to be *unmounted*.

NOTE: You said this is the "root" logical volume you're trying to expand. Most of us stopped lumping systems into one big filesystem ages ago. Typically, I cut eight logical volumes from one volume group for just the system itself, and then create another volume group for "user data" if I have to give users room to play:

Code:
sysop@hydra:mnt $ df -hFext4

Side Note #2: /dev/md0 is a software-based RAID1 created with MD tools.

P.S: More than likely, you're going to have to boot off of a CD or DVD in order to resize that filesystem. |
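To make the four steps concrete, here's a hedged sketch of the whole sequence for this particular setup. The device names are taken from the thread, but the size figure is a placeholder, not Bort's actual number:

```shell
# 1. Grow the partition with fdisk (for a root disk, do this from a
#    rescue/live environment; the extended partition must be recreated
#    with the same starting point and a larger end).
sudo fdisk /dev/cciss/c0d0

# 2. Tell LVM2 that the physical volume's underlying partition grew.
sudo pvresize /dev/cciss/c0d0p5

# 3. Grow the logical volume (the +140G figure is a placeholder).
sudo lvextend --size +140G /dev/myserver/root

# 4. Check the filesystem, then grow it (unmounted, from the live disk).
sudo e2fsck -f /dev/myserver/root
sudo resize2fs /dev/myserver/root
```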
Hi Xeleema
You're right, it is a hardware RAID using an "LVM2-only" setup. I'll give it a go using resize2fs from a live disk. Thanks for the advice on splitting the file system; as far as I'm aware, our current setup was the default Ubuntu configuration (could be wrong though...). I'll let you know how I get on.

Cheers
Bort |
Unsuccessful pvresize
Still struggling with this. Extending /dev/cciss/c0d0p2 was fine. fdisk now shows:
Code:
$ sudo fdisk -l

Cheers
Bort |
Hm, did the "pvresize" command give you an error?
To verify that it was successful, you need to check the output of "pvdisplay /path/to/device", rather than "fdisk -l".

Remember, a "Physical Volume" is LVM2 terminology referring to either a chunk of a disk (a partition) or a whole disk. It doesn't have any tie-in with what "fdisk" thinks disks are. The "fdisk" command will just show you how disks are divided up at the "partitions" level, based off of information the kernel (via the appropriate modules) was able to read.

Think of it like a stack: you're at the top ("User") and at the bottom is the actual hard disk itself. LVM2 adds the following layers;

"User"
"Shell"
"Filesystems"
"Logical Volumes" (LVM2)
"Volume Groups" (LVM2)
"Physical Volumes" (LVM2)
"Partitions"
"Kernel"
"Hard Drive"

P.S: Remember, you're going to have to unmount whatever filesystem you run "resize2fs" against. |
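In other words, the check looks something like this; the "PV Size" line (and the free-extent count) is what should change after a successful pvresize:

```shell
# Show the physical volume as LVM2 sees it -- compare "PV Size"
# before and after running pvresize.
sudo pvdisplay /dev/cciss/c0d0p5

# A terser alternative: one line per PV, with total and free space.
sudo pvs /dev/cciss/c0d0p5
```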
Hi Xeleema
There were no errors from pvresize. I did check pvdisplay (I just managed not to post it). The output is below, but it's the same as before running the pvresize command.

Cheers
Bort

Code:
$ sudo pvresize /dev/cciss/c0d0p5 Code:
$ sudo pvdisplay /dev/cciss/c0d0p5 |
Ah, okay, I think I know where the confusion is coming from.
Just to be clear, "fdisk" and the LVM2 commands don't really have anything to do with each other. So when "fdisk" tells you a partition is "Extended", that doesn't mean that "pvresize" "extended" the partition.

As far as "fdisk" output is concerned, you have two kinds of partitions on a disk with a DOS label (trust me, you have a DOS label): "Primary" partitions and "Extended" partitions. The only difference between the two, as far as "fdisk" is concerned, is that an "Extended" partition can have multiple partitions stacked inside of it. Again, this is a feature of a DOS disk label, been around since the late '80s. Nothing at all to do with LVM2.

Now, let me compare the "fdisk" outputs from the beginning of this thread and just now.

"fdisk" Starting Out:
Code:
Disk /dev/cciss/c0d0: 440.3 GB, 440342896640 bytes

"fdisk" Output Now:
Code:
Disk /dev/cciss/c0d0: 440.3 GB, 440342896640 bytes

So, the solution might be simpler than we thought:

1. Create an additional partition with fdisk inside of c0d0p2. You will have to specify "Extended" rather than "Primary".
2. Check your "fdisk -l" output for the device name of the new partition. Be sure to set the "type" of the partition to "Linux LVM" (the same as c0d0p5).
3. Use "pvcreate" on that new device.
4. Use "vgextend" to add the new physical volume to your volume group.
5. Confirm that the size of your volume group has expanded with "vgdisplay".
6. Use "lvextend" to make your logical volume(s) bigger.
7. Use "resize2fs" to make your filesystems bigger.

NOTE: You will have to unmount whichever filesystems those would be. Remember that you have only one filesystem per logical volume. Also, if this is the "root" filesystem, you will have to boot off of a CD/DVD with LVM support prior to doing this, as the system will not let you resize a filesystem 'hot' (while it's mounted). It's also been my experience that "resize2fs" wants an fsck done to the filesystem before it runs, regardless of its clean/dirty bit state.

Sound like a plan? |
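A rough command-line sketch of steps 3 through 7, assuming the new partition shows up as /dev/cciss/c0d0p6 -- that device name is a guess, so check your own "fdisk -l" output first:

```shell
# 3. Initialize the new partition as an LVM2 physical volume.
sudo pvcreate /dev/cciss/c0d0p6

# 4. Add it to the existing volume group ("myserver", per the thread).
sudo vgextend myserver /dev/cciss/c0d0p6

# 5. Confirm the volume group grew (look at "Free  PE / Size").
sudo vgdisplay myserver

# 6. Grow the logical volume into the new space (+140G is a placeholder).
sudo lvextend --size +140G /dev/myserver/root

# 7. fsck, then grow the (unmounted) filesystem from a live CD/DVD.
sudo e2fsck -f /dev/myserver/root
sudo resize2fs /dev/myserver/root
```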
Quote:
Cheers Bort |
*BUMP*
So how's it going? |
Worked brilliantly so far! I'm away from the server, so I'll do the resize2fs bit tomorrow when I can stick the live disk in.

Cheers
Bort

Quote:
|
Excellent! If you have any more questions, feel free to post back to this thread.
Otherwise, if you could click the blue thumb at the bottom of any posts that helped you out, I'd appreciate it! |
All done. From the live disk I had to:
Code:
$ sudo apt-get install lvm2

Thanks for all the help, xeleema. Your advice has been excellent.

Cheers
Bort

Quote:
|
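For anyone landing on this thread later: Bort's full command list from the live disk wasn't posted, but on an Ubuntu live CD the sequence typically looks something like this (a sketch of the usual steps, not his exact session):

```shell
# Install the LVM2 userland tools into the live session.
sudo apt-get install lvm2

# Scan for volume groups and activate them so the /dev/myserver/*
# device nodes appear.
sudo vgscan
sudo vgchange -ay

# resize2fs wants a clean filesystem first.
sudo e2fsck -f /dev/myserver/root

# Grow the filesystem to fill the (already-extended) logical volume.
sudo resize2fs /dev/myserver/root
```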
That's great news, Bort!
Power to the Penguins! |
resize2fs
Just to update this thread: as of Linux kernel 2.6.11, and with an e2fsprogs package newer than 1.39.1, you can now resize an ext3 filesystem while it's mounted. I recently did this on a RAID1 mirror (2 drives, 300GB + 500GB) where I upgraded one of the members from 300GB to 1TB. Once done, I ran resize2fs and it worked flawlessly.
The output looks like this when it's resizing the share: Code:
% resize2fs /dev/lvm-raid/lvm0 |
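On a recent enough kernel and e2fsprogs, the whole grow can be done with the filesystem still mounted. A sketch using the LV name from the post above (the PV device name here is hypothetical, and the size is a placeholder):

```shell
# Grow the physical volume and the logical volume as usual.
sudo pvresize /dev/sdb1
sudo lvextend --size +200G /dev/lvm-raid/lvm0

# With kernel >= 2.6.11 and recent e2fsprogs, ext3 can be grown
# online -- no unmount, no live CD needed.
sudo resize2fs /dev/lvm-raid/lvm0
```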