I can't remove a PV from LVM; it says no space left, but I don't have any data!
I saw other threads about something like this, but they weren't helpful for me.
I have an LVM volume group with 5 disks. I've been doing some benchmarks on the file system with this LVM setup; now I would like to remove one PV from the volume group. I've tried
Thanks so much for your help
AFAIK the only way to remove a physical volume is to move its extents onto another physical volume. This is a serious shortcoming of LVM in my opinion. Your drive sde has 97324 physical extents still allocated to it, which have to be moved to another drive, as I know of no way to remove them otherwise. None of your other PVs have any free extents to use for this purpose.
The way LVM works is to use multiple disks as a single volume, so once you have written any data to the volume, the data could be spread over all the disks. This is why you can't wipe just one of the disks.
The most straightforward way to reduce the number of disks in a volume group would be to buy a bigger disk and migrate all the extents from two existing PVs to the new disk. Then you can remove the two old disks. LVM seems to be predicated on ever-increasing disk capacity.
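As a rough sketch of that migration, assuming the new disk shows up as /dev/sdf, the two old PVs are /dev/sdd and /dev/sde, and the volume group is called vg0 (all hypothetical names, substitute your own):

```shell
# Prepare the new disk as a PV and add it to the volume group
pvcreate /dev/sdf
vgextend vg0 /dev/sdf

# Move all allocated extents off the old PVs; with no destination
# given, pvmove uses any free extents elsewhere in the VG
pvmove /dev/sdd
pvmove /dev/sde

# Once their allocated PE counts are zero, drop them from the VG
vgreduce vg0 /dev/sdd /dev/sde
pvremove /dev/sdd /dev/sde
```

pvmove can run while the volume is in use, but it is slow and a power loss mid-move is risky, so back up first.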
Another way around the issue (in the future) would be to build the LVM volume onto a raid set, so that the raid takes care of keeping the data safe when removing a disk. This also has its pitfalls of course.
Thanks Smoker, your response was pretty helpful. I'm going to use RAID so I have mirroring; then I will run some tests, because I would like to know how to restore after a system failure.
In other words, from an LV perspective rather than an extents perspective ...
A PV may be removed from a VG when it is empty, that is, when it contains no LVs or parts of LVs. LVs which are not required can be removed using lvremove. Any parts of LVs which are required can be moved off the PV with the pvmove command.
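For example (the VG name myvg is hypothetical; the PV matches the one in the OP's output), emptying and then removing a PV looks like:

```shell
# Move any extents still allocated on the PV onto other PVs in the VG
pvmove /dev/sde

# Verify the PV now shows "Allocated PE  0"
pvdisplay /dev/sde

# Remove the now-empty PV from the volume group
vgreduce myvg /dev/sde

# Optionally wipe the LVM label so the disk can be reused elsewhere
pvremove /dev/sde
```

Note that pvmove will fail if the remaining PVs don't have enough free extents to receive the data, which is the situation described above.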
An LVM LV is equivalent to a HDD partition; deleting all the files in its file system or reformatting it doesn't affect the partition itself.
The OP is equivalent to saying for example "I've removed all the data from /dev/sdb but fdisk -l /dev/sdb still shows partitions".
Yesterday I successfully removed a hard disk from an LVM volume. My LVM setup of approx. 2TB was on two hard disks (PVs): one of 500GB, the other of 1.5TB. I wished to remove the 1.5TB disk. Here is what I did.
1) Backup everything.
2) Delete enough files from the filesystem so that everything would fit on the disk I was keeping. To be safe I reduced files to under 400GB.
3) Boot from a CD ROM (my LVM volume was the root filesystem so I couldn't just unmount it). I used the Ubuntu server boot disk and entered the recovery command line console.
4) Check the disk for errors... e2fsck -f /dev/VolGrp/root
5) Resize the filesystem on the logical volume to fit the remaining disk... resize2fs -p /dev/VolGrp/root 450G
6) Resize the logical volume to be slightly bigger than the filesystem (just in case)... lvm lvresize -L 455G /dev/VolGrp/root
7) Align the end of the filesystem with the end of the LV... resize2fs -p /dev/VolGrp/root
8) Run lvm pvdisplay to check the "Allocated PE" count on the disk you want to remove. It needs to be zero. As I wanted to remove /dev/sdb1 I was lucky: everything had moved onto /dev/sda1, so the allocated PEs were zero on the disk I wanted to remove. If it is not zero, then you either have to use pvmove or do steps 5 to 7 again.
9) Assuming allocated PEs are now zero, remove the PV from the volume group (vgreduce takes the VG name, not the LV path)... lvm vgreduce VolGrp /dev/sdb1
and you are done. I was able to remove /dev/sdb1 physically from the system and reboot.
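The steps above can be sketched as one command sequence (VG/LV names and sizes are the ones from this post; substitute your own, and note the filesystem must be shrunk before the LV):

```shell
# From a rescue/live environment, with the LVM root not mounted:
e2fsck -f /dev/VolGrp/root              # step 4: check the filesystem first
resize2fs -p /dev/VolGrp/root 450G      # step 5: shrink the filesystem
lvresize -L 455G /dev/VolGrp/root       # step 6: shrink the LV, slightly larger than the fs
resize2fs -p /dev/VolGrp/root           # step 7: grow the fs to exactly fill the LV
pvdisplay                               # step 8: confirm "Allocated PE  0" on /dev/sdb1
vgreduce VolGrp /dev/sdb1               # step 9: drop the empty PV from the VG
```

Getting the order wrong in steps 5-6 (shrinking the LV below the filesystem size) will corrupt the filesystem, which is why the post leaves a 5G safety margin and then lets resize2fs reclaim it.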