LinuxQuestions.org


mn124700 05-06-2021 04:00 PM

Free-up space on logical volume under LVM?
 
I have a logical volume on an SSD drive that is filled to capacity. Both the VG and the PV are shown as full. I'm thinking I should run fstrim to free up unused space on the LV. However, the fstrim command requires that I specify a mount point, and I'm not sure what the mount point is for an LV under LVM.

I tried "fstrim -av", but this did not trim the LV in question. I also tried "fstrim -v /dev/mapper/myLVM", but that just gives an error saying the specified mount point is not a directory.
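
From its man page, fstrim seems to want the mount point of a mounted filesystem rather than a device node, something like this (the mount point below is just a placeholder):

Code:

# fstrim acts on a mounted filesystem, not on the raw LV device node
mount /dev/mapper/myLVM /mnt/example
fstrim -v /mnt/example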

So, I'm wondering how to free up space.

Thanks for any advice,
Eric

HTop 05-07-2021 07:28 AM

Are your logical volumes full, or the filesystems on top of them?
If the latter, you should remove some unneeded files.

mn124700 05-07-2021 08:51 AM

Thanks for the reply. Both "pvs" and "vgs" show zero for free space. If I do an "lvs" and add up the used space of all the LVs, it's far less than the capacity of the SSD drive. So, I'm thinking there are unused files within the VG that need to be discarded.

Am I correct?

Eric

HTop 05-07-2021 09:38 AM

A volume group and its logical volumes usually show as full simply because all of the space has been allocated to them; it does not mean that the filesystems on those volumes are full.
You can get filesystem usage statistics with the df -h command.
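
For example, you can compare what is allocated with what is actually used like this (myvg is a placeholder for your VG name):

Code:

vgs myvg                                                      # VFree = space not yet allocated to any LV
lvs -o lv_name,lv_size,data_percent,metadata_percent myvg     # Data%/Meta% = actual usage inside thin volumes
df -h                                                         # usage of the mounted filesystems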

mn124700 05-07-2021 11:17 AM

I really appreciate your help, as I'm totally confused. The main problem I'm trying to solve is to extend the thin-pool metadata in a certain VG (satassd), but I get an error...

Code:

root@pve:~# lvextend --poolmetadatasize +1G satassd/data
  Insufficient free space: 256 extents needed, but only 0 available

I thought this was because I was out of space due to files not being discarded from the ssd drive...

Code:

root@pve:~# pvs
  PV            VG      Fmt  Attr PSize    PFree 
  /dev/nvme0n1p3 pve    lvm2 a--  <465.26g <16.00g
  /dev/sdc1      satassd lvm2 a--    <1.82t      0
root@pve:~# vgs
  VG      #PV #LV #SN Attr  VSize    VFree 
  pve      1  21  0 wz--n- <465.26g <16.00g
  satassd  1  11  0 wz--n-  <1.82t      0

Note that the pve volume group has free space, but the satassd VG (the one I'm having trouble with) does not.

Despite this report of zero free space, lvs does not seem to show enough usage to fill the satassd VG...

Code:

root@pve:~# lvs -a
  LV                                VG      Attr      LSize    Pool Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  data                              pve    twi-aotz-- <338.36g                    56.91  3.73                           
  [data_tdata]                      pve    Twi-ao---- <338.36g                                                         
  [data_tmeta]                      pve    ewi-ao----    3.45g                                                         
  [lvol0_pmspare]                    pve    ewi-------    3.45g                                                         
  root                              pve    -wi-ao----  96.00g                                                         
  snap_vm-102-disk-0_OMV5_c          pve    Vri---tz-k  50.00g data vm-102-disk-0                                       
  snap_vm-102-disk-0_OMV5_initial    pve    Vri---tz-k  50.00g data vm-102-disk-0                                       
  snap_vm-102-disk-0_OMV_3_24_21    pve    Vri---tz-k  50.00g data vm-102-disk-0                                       
  snap_vm-102-disk-0_OMV_d          pve    Vri---tz-k  50.00g data vm-102-disk-0                                       
  snap_vm-107-disk-0_PlayOn          pve    Vri---tz-k  50.00g data vm-107-disk-0                                       
  snap_vm-107-disk-0_PlayOn_1        pve    Vri---tz-k  50.00g data vm-107-disk-0                                       
  snap_vm-107-disk-0_PlayOn_3_24_21  pve    Vri---tz-k  50.00g data vm-107-disk-0                                       
  snap_vm-107-disk-0_PlayOn_b        pve    Vri---tz-k  50.00g data vm-107-disk-0                                       
  snap_vm-109-disk-0_Plex3Deb        pve    Vri---tz-k  50.00g data                                                     
  snap_vm-109-disk-0_Plex3Deb_b      pve    Vri---tz-k  50.00g data                                                     
  snap_vm-109-disk-0_Plex_01        pve    Vri---tz-k  50.00g data vm-109-disk-0                                       
  snap_vm-109-disk-0_Plex_02        pve    Vri---tz-k  50.00g data vm-109-disk-0                                       
  snap_vm-109-disk-0_Plex_03        pve    Vri---tz-k  50.00g data vm-109-disk-0                                       
  snap_vm-109-disk-0_Plex_3_24_21    pve    Vri---tz-k  50.00g data vm-109-disk-0                                       
  snap_vm-109-disk-0_Plex_p          pve    Vri---tz-k  50.00g data vm-109-disk-0                                       
  swap                              pve    -wi-ao----    8.00g                                                         
  vm-102-disk-0                      pve    Vwi-aotz--  50.00g data              12.84                                 
  vm-107-disk-0                      pve    Vwi-aotz--  50.00g data              72.98                                 
  vm-109-disk-0                      pve    Vwi-aotz--  50.00g data              90.50                                 
  base-111-disk-0                    satassd Vri-a-tz-k  50.00g data              5.77                                 
  base-112-disk-0                    satassd Vri-a-tz-k  50.00g data              6.34                                 
  data                              satassd twi-aotz--  <1.82t                    6.01  74.61                         
  [data_tdata]                      satassd Twi-ao----  <1.82t                                                         
  [data_tmeta]                      satassd ewi-ao----  100.00m                                                         
  [lvol0_pmspare]                    satassd ewi-------  100.00m                                                         
  snap_vm-104-disk-0_Anaconda3_21_21 satassd Vri---tz-k  50.00g data vm-104-disk-0                                       
  snap_vm-106-disk-0_Ubuntu3_21_21  satassd Vri---tz-k  50.00g data vm-106-disk-0                                       
  snap_vm-113-disk-0_ZM3_21_21      satassd Vri---tz-k  50.00g data vm-113-disk-0                                       
  vm-100-disk-0                      satassd Vwi-a-tz--  50.00g data              21.96                                 
  vm-101-disk-0                      satassd Vwi-a-tz--  50.00g data              23.85                                 
  vm-104-disk-0                      satassd Vwi-a-tz--  50.00g data              44.77                                 
  vm-106-disk-0                      satassd Vwi-a-tz--  50.00g data              20.60                                 
  vm-113-disk-0                      satassd Vwi-a-tz--  50.00g data              98.47

Looks like there should be plenty of space available, since the whole VG is 1.82T in size. So why can't I extend the metadata for satassd/data? (Side note: data in satassd is a "thin" pool, if that matters.)

Is the satassd VG locked in some way?

Thanks for any insights.

Eric

P.S. Here's the result of "df -h"

Code:

root@pve:~# df -h
Filesystem                Size  Used Avail Use% Mounted on
udev                      16G    0  16G  0% /dev
tmpfs                    3.2G  110M  3.1G  4% /run
/dev/mapper/pve-root      94G  31G  59G  35% /
tmpfs                      16G  43M  16G  1% /dev/shm
tmpfs                    5.0M    0  5.0M  0% /run/lock
tmpfs                      16G    0  16G  0% /sys/fs/cgroup
/dev/fuse                  30M  28K  30M  1% /etc/pve
//192.168.1.75/vmbackups  5.5T  1.3T  4.2T  24% /mnt/pve/vmbackups
tmpfs                    3.2G    0  3.2G  0% /run/user/0


sundialsvcs 05-18-2021 08:17 PM

The usual solution is to simply add another physical volume to the volume group, thereby increasing its capacity. Then resize the logical volume and its filesystem to take advantage of the additional space.
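
A rough sketch of that approach for this VG, assuming a spare disk or partition is available (the device name, the LV name some_lv, and the sizes are only placeholders):

Code:

pvcreate /dev/sdX1                          # initialise the new partition as a PV
vgextend satassd /dev/sdX1                  # add it to the volume group
# for an ordinary LV carrying a filesystem, grow both together:
lvextend -r -L +100G satassd/some_lv
# in this thread's case, the freed extents could instead go to the pool metadata:
lvextend --poolmetadatasize +1G satassd/data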

FYI: If you're routinely dealing with a situation where "various directories automatically fill up with things that you really don't need to keep forever," the logrotate utility is remarkably versatile. Although it is customarily used to keep parts of /var/log under control, it can actually be used anywhere. It will automatically compress files after a specified period and, if you wish, automatically discard those which have exceeded a specified age.
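
A hypothetical /etc/logrotate.d/ entry, just to illustrate using it outside /var/log (the path and retention values are examples only):

Code:

# /etc/logrotate.d/myapp-dumps  (hypothetical)
/srv/myapp/dumps/*.csv {
    weekly
    rotate 8          # keep at most 8 rotated copies
    compress          # gzip the rotated copies
    maxage 60         # delete rotated copies older than 60 days
    missingok
    notifempty
}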

jamison20000e 05-18-2021 08:31 PM

Code:

bleachbit
to some extent

berndbausch 05-19-2021 01:11 AM

You have an LV named data that occupies 1.8TB. Therefore, the VG is full. I guess data is your thin pool, so you should still be able to create more thin LVs.
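
For example, creating another thin LV from the existing pool would look something like this (the name and virtual size are placeholders):

Code:

# -V is the virtual size; a thin LV does not need free space in the VG up front
lvcreate -V 50G --thinpool data -n vm-new-disk-0 satassd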

Actually, it's not quite clear to me what you want. data can't be extended without adding PVs to the VG, that's certain.

GazL 05-19-2021 08:25 AM

As I understand it, thin pools are made up of two components, data and metadata, each of which is stored in its own specialised type of LV.
e.g.

data
[data_tdata]
[data_tmeta]

Problem seems to be that the OP, or a predecessor thereof, didn't make the data_tmeta LV big enough and now wants to increase it, but all remaining space in the VG is allocated to the data_tdata component of the thin pool. The OP has free space in the data component, but none in the VG itself.

The problem (s)he has now is that -- unless things have changed recently -- there's no way to non-destructively shrink the LV holding the data_tdata component in order to free up unused extents to expand data_tmeta with.


1st rule of LVM Club: never allocate it all. Always leave a portion free for unforeseen growth. That doesn't help the OP now, of course, but it's a valuable lesson for the future, and for the rest of us.
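
If building such a pool again from scratch, that might look something like this (sizes are purely illustrative; the point is to leave headroom in the VG and not to skimp on metadata):

Code:

# on a ~1.9T VG, allocate only ~1.7T to the pool and size its metadata generously
lvcreate --type thin-pool -L 1.7T --poolmetadatasize 1G -n data satassd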


Personally, I'm not a fan of thin-provisioning. While it has its niche uses, unless one is very attentive, and can quickly and easily respond to the requirement of additional storage for the pool, under-provisioning storage is just asking for trouble somewhere down the line.
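
If one does use it, LVM can at least grow the pool automatically as it fills, provided the VG still has free space to grow into, which comes back to the rule above (the values here are only examples):

Code:

# /etc/lvm/lvm.conf, activation section
thin_pool_autoextend_threshold = 80   # act once the pool is 80% full
thin_pool_autoextend_percent = 20     # grow it by 20% each time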

sundialsvcs 05-19-2021 10:14 AM

Hmmm ... I had failed to notice that. Yes, "thin pools" are definitely an edge-case and I have never liked the idea of them. I don't know if the OP ever said why a decision might have been made to use them in his shop.

