LinuxQuestions.org
Go Back   LinuxQuestions.org > Forums > Linux Forums > Linux - Server
Old 06-12-2012, 07:08 AM   #1
Arodef
Member
 
Registered: Apr 2004
Distribution: Centos, Fedora
Posts: 125

Rep: Reputation: 17
Need help shrinking a logical volume and then removing a physical disk from it


I initially created my logical volume from two identically sized hard disks: /dev/sdb and /dev/sdc. It was a linear volume. I now want to shrink that logical volume to the size of /dev/sdb only and then remove the disk /dev/sdc so I can use it on another computer. I have not filled up the drive so the data does not exceed the space on a single drive.

I just want to make sure I don't lose any data or screw anything up. What are the commands I need to shrink the current logical volume down to the size of only /dev/sdb? How do I find the size of /dev/sdb and specify that size for the shrink? How can I then safely remove /dev/sdc from the logical volume and then remove it from the system? What do I need to be careful of?

BTW, here are the commands I used to create the volume from the two disks:

pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate -s 32M MyVolGroup /dev/sdb /dev/sdc
lvcreate -n LOGVOLUME -l 131070 MyVolGroup
mkfs.ext3 /dev/MyVolGroup/LOGVOLUME

Thanks for any tips!

Last edited by Arodef; 06-12-2012 at 07:15 AM.
 
Old 06-12-2012, 08:18 AM   #2
MensaWater
LQ Guru
 
Registered: May 2005
Location: Atlanta Georgia USA
Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO
Posts: 7,831
Blog Entries: 15

Rep: Reputation: 1669
First you need to verify that the size of LOGVOLUME is less than or equal to the size of one disk:

pvdisplay /dev/sdb (and/or pvdisplay /dev/sdc) will show you the size and physical extents (PE) in each physical volume (PV) of the Volume Group (VG).

lvdisplay /dev/MyVolGroup/LOGVOLUME will show you the size and extents of the logical volume (LV).

df -h can be used to see how large the filesystem on the LV is. If the filesystem's USED space is more than one physical disk then you can NOT do what you want. If USED is less than one physical disk but the allocated size is larger than one physical disk, you can do it, but you first have to shrink the filesystem.

To shrink a filesystem you must:
Unmount the filesystem.
Run e2fsck on it to verify it is OK.
Run resize2fs to set it to the size you want.

Once the filesystem is smaller than one disk you can reduce the LV size using lvreduce command.

You can then use the pvmove command to move all extents from one disk to the other to ensure they are all on one disk.

Once the LV is smaller than one disk and all extents are on one disk you can remove the second disk by using the vgreduce command.

Each of the commands has a man page for more details - just type "man <command>" for exact syntax to use. You didn't post the size of your disks so I didn't want to give you exact commands.
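To make the order concrete, here is a sketch of the whole sequence. This is an illustration only, not exact commands: it assumes the PVs are /dev/sdb and /dev/sdc as in the original post, the mount point /mnt/yourmount and the 1500G/1600G sizes are placeholders you must replace with values derived from your own pvdisplay/lvdisplay/df output, and you should take a backup first.

```shell
# 1. Check sizes first
pvdisplay /dev/sdb /dev/sdc                 # PE counts per PV
lvdisplay /dev/MyVolGroup/LOGVOLUME         # LV size in extents
df -h /mnt/yourmount                        # filesystem used vs allocated

# 2. Unmount and check the filesystem
umount /mnt/yourmount
e2fsck -f /dev/MyVolGroup/LOGVOLUME

# 3. Shrink the filesystem below the size of one disk (placeholder size)
resize2fs /dev/MyVolGroup/LOGVOLUME 1500G

# 4. Shrink the LV, keeping it at least as large as the filesystem
lvreduce -L 1600G /dev/MyVolGroup/LOGVOLUME

# 5. Move any remaining extents off the disk being removed
pvmove /dev/sdc

# 6. Remove the emptied PV from the VG, then wipe its LVM label
vgreduce MyVolGroup /dev/sdc
pvremove /dev/sdc
```

The ordering matters: the filesystem must never be larger than the LV that holds it, so you shrink the filesystem first and grow it last.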
 
1 member found this post helpful.
Old 06-12-2012, 08:29 PM   #3
Arodef
Member
 
Registered: Apr 2004
Distribution: Centos, Fedora
Posts: 125

Original Poster
Rep: Reputation: 17
Thanks a lot for the detailed answer MensaWater. Please see the details on my config at the very bottom.

Can I double-check the exact commands with you? I don't want to risk losing data by screwing this up.


1. resize2fs
Since I want to resize the LV to the size of one of the hard drives (/dev/sdb1), I determined the number of blocks in /dev/sdb1 by running fdisk (see below). That gives me:
Code:
Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      267349  2147480811   8e  Linux LVM
So is the correct command:
Code:
resize2fs /dev/MyVolGroup/LOGVOLUME 2147480811
Also, do you think I should run e2fsck again after this command just to be safe?

2. lvreduce
Since I'm removing one of the two 2TB disks, I was thinking of using the option -L-2t to take 2TB off, but I'm not sure if that could produce rounding errors. If you look at my original lvcreate, I specified the size in extents, so I'll specify extents here too and reduce by the number of extents in /dev/sdc1, 65535. So is this command correct:
Code:
lvreduce -l-65535 /dev/MyVolGroup/LOGVOLUME
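As a sanity check on the extent arithmetic (using the 32 MiB PE size and the 65535 Total PE per PV from the pvdisplay output at the bottom of this post):

```shell
# Extent arithmetic for the lvreduce above (figures from pvdisplay)
pe_bytes=$((32 * 1024 * 1024))                 # 32 MiB per extent
reduce_bytes=$((65535 * pe_bytes))             # extents on one PV
echo "removing ${reduce_bytes} bytes"          # 2198989701120, just under 2 TiB
echo "remaining extents: $((131070 - 65535))"  # 65535, exactly one PV's worth
```

So reducing by 65535 extents leaves the LV at exactly the usable capacity of a single PV, with no rounding involved.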
BTW I read another post here: http://www.linuxquestions.org/questi...pvmove-707477/ which suggests resizing the LV a little larger than the size you gave resize2fs in the previous step, to make sure the reduced LV can still hold the filesystem. It then suggests running resize2fs again "to synchronise the end of the filesystem with the end of the resized logical volume." All that confused me; is it really necessary?

3. pvmove
Run the following command to see how the LV is distributed across the two drives:
Code:
pvs -o+pv_used
Then run
Code:
pvmove /dev/sdc1
to move the extents off sdc1. Then rerun the previous pvs command to verify the extents were moved.
4. vgreduce
Assuming the LV data is all safely on /dev/sdb1, remove /dev/sdc1 with this command:
Code:
vgreduce MyVolGroup /dev/sdc1
5. pvremove
Tracing back through my steps of creating this LV, it seems I should run this command on /dev/sdc1 since I want to remove the disk from this system entirely. Here's the command:
Code:
pvremove /dev/sdc1
Thanks again for your help MensaWater. I hope this helps others with the same problems as me.

Code:
# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               MyVolGroup
  PV Size               2.00 TB / not usable 29.23 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              65535
  Free PE               0
  Allocated PE          65535
  PV UUID               0QQkOv-7T6Z-XcCX-0rmC-0Psy-eHxe-7S4tom

  --- Physical volume ---
  PV Name               /dev/sdc1
  VG Name               MyVolGroup
  PV Size               2.00 TB / not usable 29.23 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              65535
  Free PE               0
  Allocated PE          65535
  PV UUID               8NqKGp-TwoZ-WjjL-MI8L-s0Xn-iyGJ-d7ZmRQ

  # vgdisplay
  --- Volume group ---
  VG Name               MyVolGroup
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               4.00 TB
  PE Size               32.00 MB
  Total PE              131070
  Alloc PE / Size       131070 / 4.00 TB
  Free  PE / Size       0 / 0
  VG UUID               a9Mhpz-1vyM-ilid-2WcK-9sP9-NDjq-XKDQuq

# lvdisplay
  --- Logical volume ---
  LV Name                /dev/MyVolGroup/LOGVOLUME
  VG Name                MyVolGroup
  LV UUID                6ZlkH6-fhEM-IxAI-ka2X-dt97-JzIy-J4xgZX
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                4.00 TB
  Current LE             131070
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2



# fdisk /dev/sdb

The number of cylinders for this disk is set to 267349.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sdb: 2199.0 GB, 2199023254528 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      267349  2147480811   8e  Linux LVM

Command (m for help): q

# fdisk /dev/sdc

The number of cylinders for this disk is set to 267349.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sdc: 2199.0 GB, 2199023254528 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      267349  2147480811   8e  Linux LVM

Command (m for help): q

Last edited by Arodef; 06-13-2012 at 05:17 AM.
 
Old 06-13-2012, 07:26 AM   #4
MensaWater
LQ Guru
 
Registered: May 2005
Location: Atlanta Georgia USA
Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO
Posts: 7,831
Blog Entries: 15

Rep: Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669
Step 1 isn't correct.

Although your partition type is "LVM", that just stands for Logical Volume Manager. Your filesystem is in an LV (Logical Volume), which is in the VG (Volume Group), which is using the PVs (Physical Volumes), which are your partitions. While you need to know how big all these things are, you can't use the partition size alone as the basis for the resize.

You have to run df -h on the filesystem to see both what is allocated and what is used.

Since your LV is 4 TB, it is likely that your filesystem is APPROXIMATELY the same size, because it is usual to make the filesystem use the entire space of the "device" (the LV, in this case) assigned to it. However, that isn't required, and it is only an APPROXIMATE size: the filesystem has overhead, so it is typically a little smaller than the device (the LV in your case) it sits on.

Also, if you look at your VG and PV output you'll notice that the PVs, and therefore the VG comprised of them, have "not usable" space as compared to the raw disk partitions. You can't use the fdisk output as the basis for resizing; instead rely on the df command for the filesystem, and the LVM commands (lvdisplay, vgdisplay) for resizing the logical volume. For resizing you don't need to look at fdisk at all.
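That "not usable 29.23 MB" in the pvdisplay output can be checked with simple arithmetic (a simplified check; LVM metadata also takes some room): the partition isn't an exact multiple of the 32 MiB extent size, so the tail end can't hold a full extent. Using the numbers from this thread:

```shell
# /dev/sdb1 is 2147480811 1K blocks (from fdisk); PE size is 32 MiB
part_bytes=$((2147480811 * 1024))
pe_bytes=$((32 * 1024 * 1024))
echo "Total PE:   $((part_bytes / pe_bytes))"        # 65535, matching pvdisplay
echo "not usable: $((part_bytes % pe_bytes)) bytes"  # 30649344 bytes = 29.23 MB
```

This is exactly why the PV is slightly smaller than the partition, and why fdisk output cannot be fed directly into resize2fs or lvreduce.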
 
Old 06-13-2012, 08:41 AM   #5
Arodef
Member
 
Registered: Apr 2004
Distribution: Centos, Fedora
Posts: 125

Original Poster
Rep: Reputation: 17
df -h reports the following for my LV:

Code:
/dev/mapper/MyVolGroup-LOGVOLUME
                      4.0T  1.3G  3.8T   1% /mnt/external

So I know the data can fit on /dev/sdb1, as that's a 2TB drive. I'd like the filesystem to use all the available space on /dev/sdb1, so I'm not quite sure what size argument to give resize2fs. Since I want the filesystem to reside on sdb1, I thought I needed to use fdisk to get stats on that disk?

Since df -h reports 1.3GB used, I think I can do this:
Code:
resize2fs /dev/MyVolGroup/LOGVOLUME 2G
Since there's only 1.3GB used, 2GB is enough. Then I run the above lvreduce command. Since I know the number of physical extents on sdb1 and sdc1, reducing by 65535 extents ensures the LV will use exactly the PEs of one drive.

Then I should run resize2fs again with no size argument? By doing that it will grow the filesystem to the full size of the LV, which should be around 2TB? I think it's starting to make sense now.

Code:
resize2fs /dev/MyVolGroup/LOGVOLUME
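Putting the plan from this exchange together, the full sequence would look something like this (a sketch using the names from this thread; run it with the filesystem unmounted, and only after a backup):

```shell
umount /mnt/external
e2fsck -f /dev/MyVolGroup/LOGVOLUME            # verify before shrinking
resize2fs /dev/MyVolGroup/LOGVOLUME 2G         # well above the 1.3G in use
lvreduce -l -65535 /dev/MyVolGroup/LOGVOLUME   # drop one PV's worth of extents
resize2fs /dev/MyVolGroup/LOGVOLUME            # grow fs to fill the ~2TB LV
e2fsck -f /dev/MyVolGroup/LOGVOLUME            # final check, to be safe
mount /dev/MyVolGroup/LOGVOLUME /mnt/external
```

Shrinking the filesystem far below the target first, then growing it back with a bare resize2fs, sidesteps any rounding mismatch between the filesystem size and the LV size.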

Last edited by Arodef; 06-13-2012 at 09:07 AM.
 
Old 06-13-2012, 08:55 AM   #6
MensaWater
LQ Guru
 
Registered: May 2005
Location: Atlanta Georgia USA
Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO
Posts: 7,831
Blog Entries: 15

Rep: Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669
It's a good idea to resize it smaller as you've indicated. 1.7 TB is probably safer than 1.5 TB.

Your order is good.
 
1 member found this post helpful.
Old 06-14-2012, 12:59 PM   #7
Arodef
Member
 
Registered: Apr 2004
Distribution: Centos, Fedora
Posts: 125

Original Poster
Rep: Reputation: 17
Thanks, I did the steps and everything seems to work properly!

The only thing I didn't quite understand: I thought the pvs command would tell me exactly how much data from the LV was held on each physical volume in the group. Instead it basically said all the PEs on each PV were assigned, none free. So it reports PE assignment rather than actual data usage stats, i.e. which PEs on which PV hold data. Then after doing lvreduce, all the PEs on sdc1 were free, and the pvmove command didn't move any PEs off sdc1.

Last edited by Arodef; 06-14-2012 at 02:54 PM.
 
1 member found this post helpful.
Old 06-14-2012, 01:25 PM   #8
MensaWater
LQ Guru
 
Registered: May 2005
Location: Atlanta Georgia USA
Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO
Posts: 7,831
Blog Entries: 15

Rep: Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669
When you do pvcreate and then later vgcreate to include the PVs, you're assigning the entire PV to the VG, so all extents are allocated in that regard. In a basic layout with multiple PVs, unless you specify a layout type that requires interleaving, or specify which PV you want used for an LV, it is apt to put extents on any PV it finds. However, the default is "concatenated", which means it will typically use first one PV, then the other. In your case you made an LV that used the entire VG, so it HAD to put extents on both PVs to do that.

When you then did the lvreduce, you were only telling it the SIZE of the LV, not the LOCATION of its extents. This is why I emphasized pvmove - it changes the LOCATION of the extents.
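One way to see the LOCATION of the extents for yourself (a sketch; `lvdisplay -m`, i.e. `--maps`, prints which PV each LV segment sits on):

```shell
pvs -o +pv_used                            # used/free space per PV
lvdisplay -m /dev/MyVolGroup/LOGVOLUME     # segment-to-PV map for the LV
pvmove /dev/sdc1                           # relocate any extents still on sdc1
pvs -o +pv_used                            # sdc1 should now show 0 used
```

Comparing the before and after pvs output is the quickest way to confirm the move actually happened.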

Glad it worked for you. Please go to Thread Tools and mark this as "Solved" - it helps others find solutions more quickly in future web searches.
 
1 member found this post helpful.
Old 06-14-2012, 03:37 PM   #9
Arodef
Member
 
Registered: Apr 2004
Distribution: Centos, Fedora
Posts: 125

Original Poster
Rep: Reputation: 17
In my case though, 'pvmove /dev/sdc1' didn't work; it gave me the error message "not enough free/allocatable physical extents". So does this mean my system may not have been properly resized? I ignored the message since I assume all the actual data resides on the PEs on /dev/sdb1.

This is how I understand it: from pvdisplay, there were 65535 PEs each on sdb1 and sdc1, all allocated, none free, for a total of 131070 PEs across the two drives. My LV was initially configured to use all of these PEs, as you can see from the original lvcreate command in my first post.

Before I did the lvreduce, the pvs -o+pv_used command reported something like this (I'm typing this from memory):

Code:
  
  PV         VG         Fmt  Attr PSize  PFree Used  
  /dev/sdb1  MyVolGroup lvm2 a-    2.00T    0   2.00T
  /dev/sdc1  MyVolGroup lvm2 a-    2.00T    0   2.00T
I used lvreduce to decrease it to 65535 PEs, the size of only one drive. As you said, this changed the size of the LV but not the location. But when I ran the pvs command again after the lvreduce, it now said (again from memory):

Code:
  
  PV         VG         Fmt  Attr PSize  PFree Used  
  /dev/sdb1  MyVolGroup lvm2 a-    2.00T    0   2.00T
  /dev/sdc1  MyVolGroup lvm2 a-    2.00T 2.00T    0
So when I did pvmove /dev/sdc1 and got that error message, I assumed it was because pvs reported 0 extents used on sdc1, so I didn't worry about it. It appears, then, that the location of the extents changed with the lvreduce as well, without having to do a pvmove? Perhaps since lvcreate used the default 'concatenated' allocation, the lvreduce simply removed all extents from the last drive, sdc1, so the pvmove wasn't necessary?

Last edited by Arodef; 06-14-2012 at 03:39 PM.
 
Old 06-14-2012, 04:13 PM   #10
MensaWater
LQ Guru
 
Registered: May 2005
Location: Atlanta Georgia USA
Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO
Posts: 7,831
Blog Entries: 15

Rep: Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669Reputation: 1669
It makes sense that in a concatenated setup an lvreduce to or below the size of one disk might simply eliminate all the extents on the second disk, because nothing had ever been written to those extents. If the USED portion of the filesystem never went beyond the size of one disk, even though the ALLOCATED size did, it would have had no reason to write ahead onto the second disk in a concatenated setup. There simply wasn't a guarantee that it never did.

I've not seen that error from pvmove before, but I suspect your assumptions are right, given the foregoing. If you were able to run a full fsck of the filesystem with no issues after the vgreduce removed the second disk, there isn't any issue with it.
 
1 member found this post helpful.
Old 06-14-2012, 05:08 PM   #13
Arodef
Member
 
Registered: Apr 2004
Distribution: Centos, Fedora
Posts: 125

Original Poster
Rep: Reputation: 17
Great, thanks a bunch for helping me understand this MensaWater. I definitely learned a lot.
 
  

