Quote:
Originally Posted by jamoody
I have a logical volume with 2 physical volumes, sda1 and sdb1,
Actually, you have a 'volume group' with 2 physical volumes.
Within that volume group you have 2 logical volumes:
LogVol00 - your 10GB rootfs
LogVol01 - your 1.8TB /storage filesystem, at 32% used.
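If you want to see that layout for yourself, the LVM reporting commands will show it; a quick check, assuming the default names from your output:

    pvs   # physical volumes - should list /dev/sda1 and /dev/sdb1 in VolGroup00
    vgs   # the volume group - expect VFree of 0 (or close to it), i.e. fully allocated
    lvs   # the logical volumes - LogVol00 and LogVol01 with their sizes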
As you've fully allocated your 1.8TB to the filesystem, pvmove isn't going to help at this stage: there are no free extents anywhere in the volume group to move data onto. You've effectively boxed yourself in.
Assuming I've read your df correctly (using code tags when you post helps no end), you've used about 534G of /storage. If your /storage filesystem is ext2 or ext3 then you may be able to get yourself out of this, but I'd consider it a high-risk change, so a good backup would be in order.
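As a sketch of that backup step (the destination /mnt/backup is hypothetical - anywhere with roughly 540G free will do):

    mkdir -p /mnt/backup
    rsync -aHAX /storage/ /mnt/backup/storage/   # -a preserves permissions/times, -H hardlinks, -A ACLs, -X extended attributes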
- Unmount /storage.
- Run e2fsck -f on /dev/mapper/VolGroup00-LogVol01; resize2fs won't shrink a filesystem that hasn't had a forced check.
- Use the resize2fs command to shrink /dev/mapper/VolGroup00-LogVol01, specifying a size somewhere around 590G.
- Use the lvreduce command to reduce /dev/mapper/VolGroup00-LogVol01 to a little above that, say 600G, to allow a good margin for safety.
- Run resize2fs on LogVol01 again without a size parameter to synchronise the end of the filesystem with the end of the resized logical volume.
- Run pvmove to move everything off the disk you want to remove (there should be free space in the volume group now).
- Check there's nothing left on that disk with pvdisplay and, if all is ok, remove it from the volume group with vgreduce.
- Say a prayer to the gods of data storage systems and try to remount /storage.
There's a command sketch of the whole sequence below.
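Purely as an untested sketch of the above, assuming sdb1 is the disk you want to remove (swap in sda1 if it's the other one) and using the device names from your output:

    umount /storage
    e2fsck -f /dev/mapper/VolGroup00-LogVol01        # resize2fs insists on a clean forced check first
    resize2fs /dev/mapper/VolGroup00-LogVol01 590G   # shrink the filesystem to ~590G
    lvreduce -L 600G /dev/mapper/VolGroup00-LogVol01 # shrink the LV, leaving a margin above the filesystem
    resize2fs /dev/mapper/VolGroup00-LogVol01        # grow the fs to meet the new 600G LV boundary
    pvmove /dev/sdb1                                 # move allocated extents off the disk to be removed
    pvdisplay /dev/sdb1                              # confirm no extents remain allocated on it
    vgreduce VolGroup00 /dev/sdb1                    # drop the now-empty PV from the volume group
    mount /storage

If lvreduce warns that it may destroy data, that's expected - it's why the filesystem was shrunk to 590G first, comfortably inside the 600G target.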
If you decide to follow this outline, research it fully and make sure you understand exactly what you're doing at each step. This is just theory; I've never tried it myself.
Even for someone fully comfortable with LVM, this is going to be a risky operation and you could easily say goodbye to all your data.
Best of luck.