LinuxQuestions.org


jamoody 02-25-2009 11:57 AM

Removing Physical Disk from LVM via pvmove
 
I'm hoping someone here can give an LVM newbie some basic help.


I have a logical volume with 2 physical volumes, sda1 and sdb1, both 1T. I want to permanently remove sdb1 so I can move it to another system. I've manually moved much of the data over to another drive not in the logical volume, leaving more than a drive's worth of free space in the logical volume. I try to evacuate sdb1 via pvmove but I get an error message that there are no extents available:

pvmove /dev/sdb1
No extents available for allocation


df shows the logical volume has 1.2T available so I would think I should be able to evacuate sdb1:

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      9.7G  3.0G  6.3G  32% /
/dev/mapper/VolGroup00-LogVol01
                      1.8T  534G  1.2T  32% /storage
/dev/sda1             190M   13M  168M   7% /boot
tmpfs                 1.7G     0  1.7G   0% /dev/shm



I'm thinking that I may need to shrink the filesystem somehow before the pvmove and subsequent vgreduce, but I don't know how. And I really don't want to experiment and lose my data.

pvdisplay -m reports:

--- Physical volume ---
PV Name               /dev/sda2
VG Name               VolGroup00
PV Size               931.32 GB / not usable 7.11 MB
Allocatable           yes (but full)
PE Size (KByte)       32768
Total PE              29802
Free PE               0
Allocated PE          29802
PV UUID               0uKTxg-SCd9-NA1Z-BYt5-e2vl-fQ72-nrXywf

--- Physical Segments ---
Physical extent 0 to 29801:
  Logical volume      /dev/VolGroup00/LogVol01
  Logical extents     0 to 29801

--- Physical volume ---
PV Name               /dev/sdb1
VG Name               VolGroup00
PV Size               931.51 GB / not usable 11.19 MB
Allocatable           yes
PE Size (KByte)       32768
Total PE              29808
Free PE               2
Allocated PE          29806
PV UUID               Mlvnwl-YJ1B-s7Fk-Kz7F-mf5G-1Fvk-BKXIN6

--- Physical Segments ---
Physical extent 0 to 319:
  Logical volume      /dev/VolGroup00/LogVol00
  Logical extents     0 to 319
Physical extent 320 to 29743:
  Logical volume      /dev/VolGroup00/LogVol01
  Logical extents     29802 to 59225
Physical extent 29744 to 29805:
  Logical volume      /dev/VolGroup00/LogVol02
  Logical extents     0 to 61
Physical extent 29806 to 29807:
  FREE
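
(For the record: the same extent map can be printed more compactly with pvs, assuming an LVM2 version whose pvs supports segment output; the fields below are the documented segment report fields.)

Code:

pvs --segments -o+lv_name,seg_start_pe,segtype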

Thanks for any pointers.

GazL 02-26-2009 06:20 AM

Quote:

Originally Posted by jamoody (Post 3457175)
I have a logical volume with 2 physical volumes, sda1 and sdb1,

Actually, you have a 'volume group' with 2 physical volumes.
Within that volume group you have 2 logical volumes:

LogVol00 - your 10GB rootfs
LogVol01 - your 1.8TB /storage filesystem at 32% used.

As you've fully allocated your 1.8TB to the filesystem, pvmove isn't going to help at this stage; you've effectively boxed yourself in.
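
You can see the box directly, by the way: pvmove needs free extents elsewhere in the volume group to move things onto, while df only measures free space inside the filesystem. Something like the following should show the group-level picture (pv_free and vg_free are standard LVM2 report fields, so this assumes nothing beyond a stock lvm2):

Code:

vgs -o vg_name,vg_size,vg_free
pvs -o pv_name,vg_name,pv_size,pv_free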



Assuming I've read your df correctly (using code tags when you post helps no end), you've used about 534G of /storage. If your /storage filesystem is ext2 or ext3 then you may be able to get yourself out of this, but I'd consider it a high-risk change, so a good backup would be in order. In outline (a command-level sketch follows the list):
  1. unmount /storage
  2. use the resize2fs command to resize /dev/mapper/VolGroup00-LogVol01, specifying a size somewhere around 590G.
  3. use the lvreduce command to reduce /dev/mapper/VolGroup00-LogVol01 to a little above that, say 600G, to allow a good margin for safety.
  4. run resize2fs on LogVol01 again without a size parameter to synchronise the end of the filesystem with the end of the resized logical volume.
  5. Run pvmove to move stuff off of the disk you want to remove (there should be space now).
  6. Check there's nothing on the disk with pvdisplay and, if all is OK, remove your disk from the volume group with vgreduce.
  7. Say a prayer to the gods of data storage systems and try to remount /storage.
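
Translated into commands, the outline above would look something like this sketch. It keeps the names from this thread (VolGroup00, LogVol01, /dev/sdb1), the 590G/600G figures are just the example sizes from the steps, and I've added an e2fsck up front because resize2fs will refuse to shrink a filesystem that hasn't had a recent clean check. Untested, so treat it as a map, not a script:

Code:

umount /storage                                  # step 1
e2fsck -f /dev/mapper/VolGroup00-LogVol01        # resize2fs wants a clean check first
resize2fs /dev/mapper/VolGroup00-LogVol01 590G   # step 2: shrink the filesystem
lvreduce -L 600G /dev/VolGroup00/LogVol01        # step 3: shrink the LV, with a margin
resize2fs /dev/mapper/VolGroup00-LogVol01        # step 4: grow the fs to fill the LV exactly
pvmove /dev/sdb1                                 # step 5: evacuate the disk being removed
pvdisplay /dev/sdb1                              # step 6: confirm nothing is left on it...
vgreduce VolGroup00 /dev/sdb1                    #         ...then drop it from the group
mount /storage                                   # step 7: say the prayer and remount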

If you decide to follow this outline, research it fully and make sure you understand exactly what you're doing at each step. This is just theory; I've never tried it myself.

Even for someone fully comfortable with LVM, this is going to be a risky operation and you could easily say goodbye to all your data.

Best of luck.

daudi 07-07-2012 01:33 PM

I know this is an old thread, but just in case it's useful to anyone: I tried this and it worked. Here's what I did:

Code:

e2fsck -f /dev/mapper/jua-root
resize2fs /dev/mapper/jua-root 100G
lvreduce /dev/mapper/jua-root -L 110G
resize2fs /dev/mapper/jua-root
pvmove /dev/mapper/wd    # (I think)
vgreduce jua /dev/mapper/wd

where jua is the volume group and wd is the new disk, opened with cryptsetup luksOpen. I then rebooted and it worked. I made sure I had a good backup before I tried this.
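
One thing worth adding for anyone doing this to move the disk to another machine, as the original poster wanted: once vgreduce succeeds, the freed device still carries an LVM label. A minimal follow-up sketch, untested here and assuming the same device name as above (drop the luksClose line if the disk isn't a LUKS volume):

Code:

pvs                        # confirm the freed PV is no longer listed in the VG
pvremove /dev/mapper/wd    # wipe the LVM label so the old metadata can't confuse the new system
cryptsetup luksClose wd    # close the LUKS mapping before pulling the disk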

