I can't remove a pv from LVM it shows no space left but i don't have any data!
Hello all,
I saw other threads about something like this, but they weren't helpful for me.
I have an LVM volume group with 5 disks. I've been doing some benchmarks on the filesystem on this LVM, and now I would like to remove one PV from the volume group. I've tried
Code:
# pvmove -v /dev/sde1
Finding volume group "test-vol"
No extents available for allocation
and also
Code:
# pvremove -ff /dev/sde1
Really WIPE LABELS from physical volume "/dev/sde1" of volume group "test-vol" [y/n]? y
WARNING: Wiping physical volume label from /dev/sde1 of volume group "test-vol"
Can't open /dev/sde1 exclusively - not removing. Mounted filesystem?
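(Editor's note: pvremove -ff fails here because the PV still belongs to the volume group and still holds allocated extents. The usual order is pvmove, then vgreduce, then pvremove — a sketch, using the names from this thread:)

```shell
# Sketch of the usual order for retiring a PV from VG "test-vol".
# Before pvmove can work, the remaining PVs need enough free extents
# to absorb everything currently allocated on /dev/sde1:
needed=$((119234 - 21910))   # allocated PE on sde1, per pvdisplay below
echo "free extents needed elsewhere: $needed"

# pvmove /dev/sde1             # relocate its extents (needs the free space above)
# vgreduce test-vol /dev/sde1  # remove the now-empty PV from the VG
# pvremove /dev/sde1           # only now wipe the PV label
```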
I have no data on the volume, but when I run pvdisplay it shows this
Code:
--- Physical volumes ---
PV Name /dev/sda1
PV UUID A5ljkF-gUm9-kLy9-F7Hs-09BF-3Sb9-8P5gre
PV Status allocatable
Total PE / Free PE 119234 / 0
PV Name /dev/sdb1
PV UUID 03CsaT-lSBX-UpWM-wrKx-AF7B-GMkh-oGXWDg
PV Status allocatable
Total PE / Free PE 119234 / 0
PV Name /dev/sdc1
PV UUID 1toYE5-i3Bq-1ctT-pJdl-ALQa-IBvY-2e0oLa
PV Status allocatable
Total PE / Free PE 119234 / 0
PV Name /dev/sdd1
PV UUID 1IfPhl-EIfB-1P7r-Apr0-bpPM-RFKW-SwTsNO
PV Status allocatable
Total PE / Free PE 119234 / 0
PV Name /dev/sde1
PV UUID zq9NMn-8R3o-cs8S-fZdl-ITBv-AR9t-LOrDfj
PV Status allocatable
Total PE / Free PE 119234 / 21910
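(Editor's note: the Total PE / Free PE counts translate to sizes once you know the extent size; vgdisplay reports it, and 4 MiB is the common default — an assumption here:)

```shell
# Convert the pvdisplay extent counts to GiB, assuming the common
# 4 MiB extent size (confirm with: vgdisplay test-vol | grep 'PE Size'):
pe_mib=4
total_pe=119234
free_pe=21910
echo "per-PV size : $(( total_pe * pe_mib / 1024 )) GiB"
echo "free on sde1: $(( free_pe  * pe_mib / 1024 )) GiB"
```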
I don't understand why it shows as full, because I've removed all the data.
AFAIK the only way to remove a physical volume is to assign its extents to another physical volume. This is a serious shortcoming of LVM in my opinion. Your drive sde has 97324 physical extents still assigned to it, which have to be reallocated to another drive, as I know of no way to remove them otherwise. None of your other PVs have any free extents to use for this purpose.
The way LVM works is to use multiple disks as a single volume, so once you have written any data to the volume, the data could be spread over all the disks. This is why you can't wipe just one of the disks.
The most straightforward way to reduce the number of disks in a logical volume would be to buy a bigger disk and migrate all the extents from 2 existing PVs to the new disk. Then you can remove the 2 old disks. LVM seems to be predicated on ever-increasing disk capacity.
Another way around the issue (in the future) would be to build the LVM volume onto a raid set, so that the raid takes care of keeping the data safe when removing a disk. This also has its pitfalls of course.
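(Editor's note: an LVM-on-RAID1 layout might look like the following sketch — device names are invented for illustration, and the commands are shown commented out since they are destructive:)

```shell
# Hypothetical LVM-on-RAID1 layout: the md layer handles disk removal/failure.
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# pvcreate /dev/md0
# vgcreate test-vol /dev/md0

# The trade-off is capacity: mirroring halves usable space.
disks=2; disk_gib=500
echo "usable: ${disk_gib} GiB (RAID1) vs $(( disks * disk_gib )) GiB (plain LVM)"
```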
Thanks Smoker, your response was pretty helpful. I'm going to use RAID so I'll have mirroring; then I'll run some tests, because I'd like to know how to restore after a system failure.
AFAIK the only way to remove a physical volume is to assign its extents to another physical volume.
More specifically, it must be empty and so any required extents must be moved (= assigned) to another PV; the others can simply be removed by removing the associated LV(s).
In other words, from an LV perspective rather than an extents perspective ...
A PV may be removed from a VG when it is empty, that is contains no LVs or parts of LVs. LVs which are not required can be removed using lvremove. Any parts of LVs which are required can be moved off the PV by the pvmove command.
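(Editor's note: one way to confirm a PV is empty before running vgreduce is the pv_pe_alloc_count field of pvs. A sample of its output is hard-coded below as a stand-in, since the real command needs root:)

```shell
# "pvs --noheadings -o pv_name,pv_pe_alloc_count" reports allocated extents
# per PV; a PV is safe to vgreduce only when that count is 0.
# Sample output used here in place of a real (root-only) run:
pvs_out="/dev/sdd1 119234
/dev/sde1 0"
echo "$pvs_out" | while read -r pv alloc; do
  if [ "$alloc" -eq 0 ]; then
    echo "$pv is empty: safe to vgreduce"
  fi
done
```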
Last edited by catkin; 06-09-2010 at 12:14 AM.
Reason: clarity - removed "within the VG" regards pvmove
Yesterday I successfully removed a hard disk from an LVM volume. My LVM of approx 2TB was on two hard disks (PVs): one of 500GB, the other 1.5TB. I wished to remove the 1.5TB disk. Here is what I did.
1) Backup everything.
2) Delete enough files from the filesystem so that everything would fit on the disk I was leaving behind. To be safe I reduced the files to under 400GB.
3) Boot from a CD ROM (my LVM volume was the root filesystem so I couldn't just unmount it). I used the Ubuntu server boot disk and entered the recovery command line console.
4) Check the disk for errors... e2fsck -f /dev/VolGrp/root
5) Resize the filesystem on the logical volume to fit the remaining disk... resize2fs -p /dev/VolGrp/root 450G
6) Resize the logical volume to be slightly bigger than the filesystem (just in case)... lvm lvresize -L 455G /dev/VolGrp/root
7) Align the end of the filesystem with the end of the LV... resize2fs -p /dev/VolGrp/root
8) Run lvm pvdisplay and check the "Allocated PE" count on the disk you want to remove; it needs to be zero. I was lucky: everything had moved onto /dev/sda1, so the allocated PE count was already zero on /dev/sdb1, the disk I wanted to remove. If it is not zero, you either have to use pvmove or repeat steps 5 to 7.
9) Assuming allocated PEs are now zero... lvm vgreduce VolGrp /dev/sdb1 (note that vgreduce takes the volume group name, not an LV path)
and you are done. I was able to remove /dev/sdb1 physically from the system and reboot.
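(Editor's note: the whole procedure collected into one sketch, names and sizes copied from the steps above; the commands are commented out since they must be run as root from rescue media with the filesystem unmounted:)

```shell
# e2fsck -f /dev/VolGrp/root            # 4) check the filesystem first
# resize2fs -p /dev/VolGrp/root 450G    # 5) shrink the fs below the target
# lvresize -L 455G /dev/VolGrp/root     # 6) shrink the LV, keeping a margin
# resize2fs -p /dev/VolGrp/root         # 7) grow the fs to fill the LV exactly
# pvdisplay                             # 8) confirm 0 allocated PE on /dev/sdb1
# vgreduce VolGrp /dev/sdb1             # 9) drop the empty PV

# The ordering matters: the LV must never become smaller than the
# filesystem inside it, hence the safety margin in step 6:
echo "margin: $((455 - 450))G"
```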