LinuxQuestions.org

-   Linux - Software (https://www.linuxquestions.org/questions/linux-software-2/)
-   -   Adding disk space to root logical volume with pvresize and lvextend (https://www.linuxquestions.org/questions/linux-software-2/adding-disk-space-to-root-logical-volume-with-pvresize-and-lvextend-762333/)

bortbortresson 10-16-2009 05:04 AM

Adding disk space to root logical volume with pvresize and lvextend
 
Hi All

I've got an HP ML370 G5 server running Ubuntu Hardy with a RAID 1+0 setup and a root logical volume that's in need of some extra disk space. I'm pretty new to RAID, and so far I've added two new 146G disks using the HP SmartArray utility. fdisk is showing the disk size as 440.3 GB (increased from ~293 GB), but df is still showing the root logical volume at 260G (full outputs below, along with those of pvdisplay, vgdisplay and lvdisplay).

I now need to extend the root logical volume and after reading various posts I'm still unsure. My current plan of action is:

1. extend /dev/cciss/c0d0p2 using fdisk
2. extend /dev/cciss/c0d0p5 using "pvresize /dev/cciss/c0d0p5"
3. extend /dev/myserver/root with "lvextend --size +140G /dev/myserver/root"

Does this sound reasonable or am I about to trash the server?

Thanks

Bort

Code:

$ sudo fdisk -l

Disk /dev/cciss/c0d0: 440.3 GB, 440342896640 bytes
255 heads, 63 sectors/track, 53535 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000af15d

          Device Boot      Start        End      Blocks  Id  System
/dev/cciss/c0d0p1  *          1          31      248976  83  Linux
/dev/cciss/c0d0p2              32      35690  286430917+  5  Extended
/dev/cciss/c0d0p5              32      35690  286430886  8e  Linux LVM

Code:

$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/myserver-root
                      260G  229G  19G  93% /
varrun                16G  416K  16G  1% /var/run
varlock                16G    0  16G  0% /var/lock
udev                  16G  60K  16G  1% /dev
devshm                16G    0  16G  0% /dev/shm
/dev/cciss/c0d0p1    236M  45M  179M  21% /boot

Code:

$ sudo pvdisplay
  --- Physical volume ---
  PV Name              /dev/cciss/c0d0p5
  VG Name              myserver
  PV Size              273.16 GB / not usable 1.66 MB
  Allocatable          yes (but full)
  PE Size (KByte)      4096
  Total PE              69929
  Free PE              0
  Allocated PE          69929
  PV UUID              GkBOI5-RYsh-DE3C-AiQy-BTFS-CT6p-06BjPk
 
$ sudo vgdisplay
  --- Volume group ---
  VG Name              myserver
  System ID           
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access            read/write
  VG Status            resizable
  MAX LV                0
  Cur LV                2
  Open LV              2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size              273.16 GB
  PE Size              4.00 MB
  Total PE              69929
  Alloc PE / Size      69929 / 273.16 GB
  Free  PE / Size      0 / 0 
  VG UUID              Hx7f8A-RPdh-rzMy-Ad6s-ODr3-Sidy-fDc1bk
 
$ sudo lvdisplay
  --- Logical volume ---
  LV Name                /dev/myserver/root
  VG Name                myserver
  LV UUID                VwhjXn-4fAy-Wn7i-esxa-Ndi1-eOw5-KfyiEV
  LV Write Access        read/write
  LV Status              available
  # open                1
  LV Size                262.05 GB
  Current LE            67086
  Segments              1
  Allocation            inherit
  Read ahead sectors    0
  Block device          254:0
 
  --- Logical volume ---
  LV Name                /dev/myserver/swap_1
  VG Name                myserver
  LV UUID                AFgNWg-Lh52-BqKc-fZyV-N6z7-fc2n-8aILgZ
  LV Write Access        read/write
  LV Status              available
  # open                2
  LV Size                11.11 GB
  Current LE            2843
  Segments              1
  Allocation            inherit
  Read ahead sectors    0
  Block device          254:1


xeleema 10-16-2009 05:38 AM

Logical Volume Manager
 
Greetingz!

From what I can tell, you're using a hardware RAID controller on the HP server, so the MD tools (the "mdadm" command) haven't been used to set up the physical volumes imported into your volume group. (The "mdadm" command is typically used to set up a software-based RAID 0/1/0+1/5 array.) So this seems to be an "LVM2-only" setup.

You're on the right track, but you need one more step:
4. Extend the filesystem with "resize2fs"

With that said, make sure you have a current working backup of the whole server before you start, just in case things go completely haywire.

Word to the wise, if this is "LVM2" and not "Veritas StorageFoundation on Linux", then you're going to have to unmount the filesystem that you want to extend. (Check the output of the "mount" command to see if you have something other than ext2, ext3, proc, sys, usbfs, or tmpfs filesystems).

There's a difference between extending the physical volume with "pvresize", extending the logical volume with "lvresize", and extending the filesystem with "resize2fs".

You can modify physical volumes, volume groups (if needed, but not in this case), and logical volumes without unmounting things. However, resizing filesystems (with the exception of vxfs from Veritas) requires said filesystem to be *unmounted*.
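To make that distinction concrete, here's a quick sketch of the extent arithmetic LVM2 does under the hood. The 4 MiB PE size and the 69929-extent total are taken from the vgdisplay output earlier in this thread; nothing here touches a disk, it's just the math.

```shell
# Sketch of LVM2 extent arithmetic, using the PE size (4 MiB) and
# Total PE count (69929) from the vgdisplay output in this thread.
PE_SIZE_KB=4096

# Volume group size = Total PE * PE size (shown in whole GiB)
VG_SIZE_KB=$((69929 * PE_SIZE_KB))
echo "VG size: $((VG_SIZE_KB / 1024 / 1024)) GiB"

# Free extents needed to satisfy "lvextend --size +140G":
echo "Extents needed for +140G: $((140 * 1024 * 1024 / PE_SIZE_KB))"
```

This is why "Free PE 0" in the pvdisplay output matters: lvextend can only hand out extents the volume group actually has free, no matter what fdisk says about the disk.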

NOTE: You said this is the "root" logical volume you're trying to expand.
Most of us stopped lumping systems into one big filesystem ages ago.

Typically, I cut eight logical volumes from one volume group for just the system itself. I'll then create another volume group for "user data" if I have to give them room to play;


Code:

sysop@hydra:mnt $ df -hFext4
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg0-lv0  248M  170M  77M  69% /
/dev/md0              63M  32M  31M  51% /boot
/dev/mapper/vg0-lv2  6.4G  4.9G  1.6G  76% /usr
/dev/mapper/vg0-lv3  2.1G  107M  2.0G  6% /var
/dev/mapper/vg0-lv4  567M  17M  545M  3% /usr/local
/dev/mapper/vg0-lv5  567M  17M  545M  4% /opt
/dev/mapper/vg0-lv6  2.1G  113M  1.9G  6% /home
/dev/mapper/vg0-lv7  2.1G  1.1G  967M  54% /tmp

Side Note #1: "lv1" (logical volume one) is used for swap.
Side Note #2: /dev/md0 is a software-based RAID1 created with MD tools.


P.S: More than likely, you're going to have to boot off of a CD or DVD in order to resize that filesystem.

bortbortresson 10-16-2009 06:13 AM

Hi Xeleema

You're right, it is a hardware RAID using an "LVM2-only" setup. I'll give it a go using resize2fs from a live disk. Thanks for the advice on splitting up the filesystem. As far as I'm aware our current setup was the default Ubuntu configuration (could be wrong though...)

I'll let you know how I get on

Cheers

Bort

bortbortresson 10-16-2009 11:24 AM

Unsuccessful pvresize
 
Still struggling with this. Extending /dev/cciss/c0d0p2 was fine. fdisk now shows:

Code:

$ sudo fdisk -l

Disk /dev/cciss/c0d0: 440.3 GB, 440342896640 bytes
255 heads, 63 sectors/track, 53535 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000af15d

          Device Boot      Start        End      Blocks  Id  System
/dev/cciss/c0d0p1  *          1          31      248976  83  Linux
/dev/cciss/c0d0p2              32      53535  429770880    5  Extended
/dev/cciss/c0d0p5              32      35690  286430886  8e  Linux LVM

However, "pvresize /dev/cciss/c0d0p5" doesn't increase the size of the LVM partition. Do I have to format the unallocated space before pvresize can extend into it, or is there something else simple I'm missing?

Cheers

Bort

xeleema 10-17-2009 03:00 AM

Hm, did the "pvresize" command give you an error?

To verify that it was successful, you need to check the output of "pvdisplay /path/to/device", rather than "fdisk -l".

Remember, a Physical Volume is LVM2 terminology referring to either a chunk of a disk (a partition), or a whole disk. It doesn't have any tie-in with what "fdisk" thinks disks are.

The "fdisk" command will just show you how disks are divided up at the "partitions" level, based off of information the kernel (via the appropriate modules) was able to read.

Think of it like a stack: you're at the top ("User") and at the bottom is the actual hard disk itself. LVM2 adds the following layers;

"User"
"Shell"
"Filesystems"
"Logical Volumes" (LVM2)
"Volume Groups" (LVM2)
"Physical Volumes" (LVM2)
"Partitions"
"Kernel"
"Hard Drive"
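Each of those layers has its own read-only inspection command. The list below is printed rather than executed, since /dev/cciss/c0d0 is specific to this server; none of these commands change anything, so they're safe to run at any time.

```shell
# Read-only inspection commands, one per layer of the stack above,
# from the filesystems down to the partition table. Listed rather
# than executed here, since /dev/cciss/c0d0 is this server's device.
LAYERS='
df -h                      # Filesystems (what is mounted where)
lvdisplay                  # Logical Volumes   (LVM2)
vgdisplay                  # Volume Groups     (LVM2)
pvdisplay                  # Physical Volumes  (LVM2)
fdisk -l /dev/cciss/c0d0   # Partitions (kernel view of the disk)
'
echo "$LAYERS"
```

Comparing the sizes reported at each layer is the quickest way to see where an expansion has stalled, as it did here between the partition table and the physical volume.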

P.S: remember, you're going to have to unmount whatever filesystem you run "resize2fs" against.

bortbortresson 10-17-2009 04:56 AM

Hi Xeleema

There were no errors from pvresize. I did check pvdisplay (though I managed not to post the output). The output is below, but it's the same as before running the pvresize command.

Cheers

Bort

Code:

$ sudo pvresize /dev/cciss/c0d0p5
  Physical volume "/dev/cciss/c0d0p5" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized

Code:

$ sudo pvdisplay /dev/cciss/c0d0p5
  --- Physical volume ---
  PV Name              /dev/cciss/c0d0p5
  VG Name              myserver
  PV Size              273.16 GB / not usable 1.47 MB
  Allocatable          yes (but full)
  PE Size (KByte)      4096
  Total PE              69929
  Free PE              0
  Allocated PE          69929
  PV UUID              GkBOI5-RYsh-DE3C-AiQy-BTFS-CT6p-06BjPk


xeleema 10-17-2009 05:31 AM

Ah, okay, I think I know where the confusion is coming from.

Just to be clear, "fdisk" and the LVM2 commands don't really have anything to do with each other.
So when "fdisk" tells you a partition is "Extended", that doesn't mean that "pvresize" "extended" the partition.

As far as "fdisk" output is concerned, you have 2 kinds of partitions on a disk with a DOS label (trust me, you have a DOS label).

There are "Primary" partitions and "Extended" partitions. As far as "fdisk" is concerned, the only difference between the two is that an "Extended" partition can have multiple "Logical" partitions stacked inside of it. Again, this is a feature of a DOS disk label, been around since '92 I think. Nothing at all to do with LVM2.

Now, let me compare the "fdisk" outputs from the beginning of this thread, and just now.

"fdisk" Starting Out:
Code:

Disk /dev/cciss/c0d0: 440.3 GB, 440342896640 bytes
Disk identifier: 0x000af15d

          Device Boot      Start        End      Blocks  Id  System
/dev/cciss/c0d0p1  *          1          31      248976  83  Linux
/dev/cciss/c0d0p2              32        35690  286430917+  5  Extended
/dev/cciss/c0d0p5              32      35690  286430886  8e  Linux LVM


"fdisk" Output Now:

Code:

Disk /dev/cciss/c0d0: 440.3 GB, 440342896640 bytes
Disk identifier: 0x000af15d

          Device Boot      Start        End      Blocks  Id  System
/dev/cciss/c0d0p1  *          1          31      248976  83  Linux
/dev/cciss/c0d0p2              32      53535  429770880    5  Extended
/dev/cciss/c0d0p5              32      35690  286430886  8e  Linux LVM

So with that said, c0d0p5 lives inside of c0d0p2. Hm, but c0d0p5 hasn't changed size, because "pvresize" doesn't affect partitions at this level (see the earlier stack example).

So, the solution might be simpler than we thought.

1. Create an additional partition with fdisk inside of c0d0p2. You will have to specify "Logical" rather than "Primary".

2. Check your "fdisk -l" output for the device name of the new partition. Be sure to set the "type" of the partition to "Linux LVM" (the same as c0d0p5).

3. Use "pvcreate" on that new device.

4. Use "vgextend" to add the new physical volume to your volume group.

5. Confirm that the size of your volume group has expanded with "vgdisplay".

6. Use "lvextend" to make your logical volume(s) bigger.

7. Use "resize2fs" to make your filesystems bigger.
NOTE: You will have to unmount whichever filesystems those would be. Remember that you have only one filesystem per logical volume. Also, if this is the "root" filesystem, you will have to boot off of a CD/DVD with LVM support prior to doing this, as the system will not let you resize a filesystem 'hot' (while it's mounted).

It's also been my experience that "resize2fs" wants an fsck done on the filesystem before it runs, regardless of its clean/dirty bit state.
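Put together, steps 1-7 might look like the dry-run sketch below. The "run" wrapper only echoes each command, so nothing is touched; the new partition name c0d0p6 is an assumption, so check "fdisk -l" for the name fdisk actually assigns before doing anything for real.

```shell
# Dry-run sketch of steps 1-7 above. The "run" wrapper only echoes
# each command; swap it for real execution only after a full backup,
# and from a live CD for the root filesystem. The partition name
# c0d0p6 is an assumption -- check "fdisk -l" for the real name.
run() { echo "WOULD RUN: $*"; }

run fdisk /dev/cciss/c0d0                     # 1-2: new Logical partition, type 8e
run pvcreate /dev/cciss/c0d0p6                # 3: initialize it as a PV
run vgextend myserver /dev/cciss/c0d0p6       # 4: add the PV to the VG
run vgdisplay myserver                        # 5: confirm the extra free PE
run lvextend --size +140G /dev/myserver/root  # 6: grow the logical volume
run e2fsck -f /dev/myserver/root              # fsck first, as noted above
run resize2fs /dev/myserver/root              # 7: grow the filesystem
```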

Sound like a plan?

bortbortresson 10-17-2009 11:44 AM

Quote:

Originally Posted by xeleema (Post 3722601)
Sound like a plan?

Yup. Thanks for all the explanations. I'll let you know how I get on with it.

Cheers

Bort

xeleema 10-18-2009 12:37 AM

*BUMP*

So how's it going?

bortbortresson 10-18-2009 05:17 PM

Worked brilliantly so far! Am away from the server so will do the resize2fs bit tomorrow when I can stick the live disk in.

Cheers

Bort

Quote:

Originally Posted by xeleema (Post 3723445)
*BUMP*

So how's it going?


xeleema 10-19-2009 02:01 AM

Excellent! If you have any more questions, feel free to post back to this thread.
Otherwise, if you could click the blue thumb at the bottom of any posts that helped you out, I'd appreciate it!

bortbortresson 10-19-2009 06:35 AM

All done. From the live disk I had to:

Code:

$ sudo apt-get install lvm2

$ sudo modprobe dm_mod

$ sudo vgchange -a y

$ sudo e2fsck -f /dev/myserver/root

$ sudo resize2fs /dev/myserver/root

There's a problem when you "modprobe" using the Ubuntu Jaunty live CD, but the Ibex one works fine.

Thanks for all the help xeleema. Your advice has been excellent.

Cheers

Bort

Quote:

Originally Posted by xeleema (Post 3724554)
Excellent! If you have any more questions, feel free to post back to this thread.
Otherwise, if you could click the blue thumb at the bottom of any posts that helped you out, I'd appreciate it!


xeleema 10-19-2009 06:38 AM

That's great news, Bort!
Power to the Penguins!

slmingol 12-27-2009 11:59 PM

resize2fs
 
Just to update this thread: as of Linux kernel 2.6.11, and with an e2fsprogs package > 1.39.1, you can now resize an ext3 filesystem while it's mounted. I recently did this on a RAID1 mirror (2 drives, 300GB + 500GB) where I upgraded one of the members from 300GB to 1TB. Once done, I ran resize2fs and it worked flawlessly.

The output looks like this when it's resizing the share:

Code:

% resize2fs /dev/lvm-raid/lvm0
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/lvm-raid/lvm0 is mounted on /export/raid1; on-line resizing
required
Performing an on-line resize of /dev/lvm-raid/lvm0 to 122095616 (4k) blocks.
The filesystem on /dev/lvm-raid/lvm0 is now 122095616 blocks long.


