LinuxQuestions.org
Linux - General: This Linux forum is for general Linux questions and discussion. If it is Linux related and doesn't seem to fit in any other forum, this is the place.

Old 05-06-2021, 04:00 PM   #1
mn124700
LQ Newbie
 
Registered: May 2021
Posts: 5

Rep: Reputation: Disabled
Free up space on logical volume under LVM?


I have a logical volume on an SSD drive that is filled to capacity. Both the VG and the PV are shown as full. I'm thinking I should run fstrim to free up unused space on the LV. However, the fstrim command requires that I specify a mount point, and I'm not sure what the mount point is for an LV under LVM.

I tried "fstrim -av", but this did not trim the LV in question. I also tried using "fstrim -v /dev/mapper/myLVM", but this just gives an error saying the specified mount point is not a directory.

So, I'm wondering how to free up space.
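(For context, fstrim expects the mount point of a mounted filesystem, not the device node. A minimal sketch of what that looks like; the /srv/data mount point here is hypothetical:)

```shell
# fstrim(8) operates on a mounted filesystem, not on a raw LV device node.
# First check where (and whether) the LV's filesystem is mounted:
findmnt /dev/mapper/myLVM

# If it is mounted, say at /srv/data (hypothetical), trim it there:
fstrim -v /srv/data

# Note: "fstrim -a" only trims mounted filesystems that support discard,
# which is why it can silently skip an LV that carries no mounted filesystem.
```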

Thanks for any advice,
Eric
 
Old 05-07-2021, 07:28 AM   #2
HTop
Member
 
Registered: Mar 2019
Posts: 40

Rep: Reputation: Disabled
Are your logical volumes full, or the filesystems on top of them?
If the latter, you should remove some unneeded files.
 
Old 05-07-2021, 08:51 AM   #3
mn124700
LQ Newbie
 
Registered: May 2021
Posts: 5

Original Poster
Rep: Reputation: Disabled
Thanks for the reply. Both "pvs" and "vgs" show zero for free space. If I do an "lvs" and add up the used space of all the LVs, it's far less than the capacity of the SSD drive. So, I'm thinking there are unused files within the VG that need to be discarded.

Am I correct?

Eric
 
Old 05-07-2021, 09:38 AM   #4
HTop
Member
 
Registered: Mar 2019
Posts: 40

Rep: Reputation: Disabled
Usually a volume group and its logical volumes show as full simply because all of the space has been allocated to those resources. This does not mean that your filesystems are full.
You can get disk space statistics with the "df -h" command.
 
Old 05-07-2021, 11:17 AM   #5
mn124700
LQ Newbie
 
Registered: May 2021
Posts: 5

Original Poster
Rep: Reputation: Disabled
I really appreciate your help, as I'm totally confused. The main problem I'm trying to solve is to extend the metadata for a certain VG (satassd), but I get an error...

Code:
root@pve:~# lvextend --poolmetadatasize +1G satassd/data
  Insufficient free space: 256 extents needed, but only 0 available
I thought this was because I was out of space due to files not being discarded from the ssd drive...

Code:
root@pve:~# pvs
  PV             VG      Fmt  Attr PSize    PFree  
  /dev/nvme0n1p3 pve     lvm2 a--  <465.26g <16.00g
  /dev/sdc1      satassd lvm2 a--    <1.82t      0 
root@pve:~# vgs
  VG      #PV #LV #SN Attr   VSize    VFree  
  pve       1  21   0 wz--n- <465.26g <16.00g
  satassd   1  11   0 wz--n-   <1.82t      0
Note that the pve volume group has free space, but the satassd VG (the one I'm having trouble with) does not.

Despite this report of zero free space, lvs does not seem to show enough usage to fill the satassd VG...

Code:
root@pve:~# lvs -a
  LV                                 VG      Attr       LSize    Pool Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  data                               pve     twi-aotz-- <338.36g                    56.91  3.73                            
  [data_tdata]                       pve     Twi-ao---- <338.36g                                                           
  [data_tmeta]                       pve     ewi-ao----    3.45g                                                           
  [lvol0_pmspare]                    pve     ewi-------    3.45g                                                           
  root                               pve     -wi-ao----   96.00g                                                           
  snap_vm-102-disk-0_OMV5_c          pve     Vri---tz-k   50.00g data vm-102-disk-0                                        
  snap_vm-102-disk-0_OMV5_initial    pve     Vri---tz-k   50.00g data vm-102-disk-0                                        
  snap_vm-102-disk-0_OMV_3_24_21     pve     Vri---tz-k   50.00g data vm-102-disk-0                                        
  snap_vm-102-disk-0_OMV_d           pve     Vri---tz-k   50.00g data vm-102-disk-0                                        
  snap_vm-107-disk-0_PlayOn          pve     Vri---tz-k   50.00g data vm-107-disk-0                                        
  snap_vm-107-disk-0_PlayOn_1        pve     Vri---tz-k   50.00g data vm-107-disk-0                                        
  snap_vm-107-disk-0_PlayOn_3_24_21  pve     Vri---tz-k   50.00g data vm-107-disk-0                                        
  snap_vm-107-disk-0_PlayOn_b        pve     Vri---tz-k   50.00g data vm-107-disk-0                                        
  snap_vm-109-disk-0_Plex3Deb        pve     Vri---tz-k   50.00g data                                                      
  snap_vm-109-disk-0_Plex3Deb_b      pve     Vri---tz-k   50.00g data                                                      
  snap_vm-109-disk-0_Plex_01         pve     Vri---tz-k   50.00g data vm-109-disk-0                                        
  snap_vm-109-disk-0_Plex_02         pve     Vri---tz-k   50.00g data vm-109-disk-0                                        
  snap_vm-109-disk-0_Plex_03         pve     Vri---tz-k   50.00g data vm-109-disk-0                                        
  snap_vm-109-disk-0_Plex_3_24_21    pve     Vri---tz-k   50.00g data vm-109-disk-0                                        
  snap_vm-109-disk-0_Plex_p          pve     Vri---tz-k   50.00g data vm-109-disk-0                                        
  swap                               pve     -wi-ao----    8.00g                                                           
  vm-102-disk-0                      pve     Vwi-aotz--   50.00g data               12.84                                  
  vm-107-disk-0                      pve     Vwi-aotz--   50.00g data               72.98                                  
  vm-109-disk-0                      pve     Vwi-aotz--   50.00g data               90.50                                  
  base-111-disk-0                    satassd Vri-a-tz-k   50.00g data               5.77                                   
  base-112-disk-0                    satassd Vri-a-tz-k   50.00g data               6.34                                   
  data                               satassd twi-aotz--   <1.82t                    6.01   74.61                           
  [data_tdata]                       satassd Twi-ao----   <1.82t                                                           
  [data_tmeta]                       satassd ewi-ao----  100.00m                                                           
  [lvol0_pmspare]                    satassd ewi-------  100.00m                                                           
  snap_vm-104-disk-0_Anaconda3_21_21 satassd Vri---tz-k   50.00g data vm-104-disk-0                                        
  snap_vm-106-disk-0_Ubuntu3_21_21   satassd Vri---tz-k   50.00g data vm-106-disk-0                                        
  snap_vm-113-disk-0_ZM3_21_21       satassd Vri---tz-k   50.00g data vm-113-disk-0                                        
  vm-100-disk-0                      satassd Vwi-a-tz--   50.00g data               21.96                                  
  vm-101-disk-0                      satassd Vwi-a-tz--   50.00g data               23.85                                  
  vm-104-disk-0                      satassd Vwi-a-tz--   50.00g data               44.77                                  
  vm-106-disk-0                      satassd Vwi-a-tz--   50.00g data               20.60                                  
  vm-113-disk-0                      satassd Vwi-a-tz--   50.00g data               98.47
Looks like there should be plenty of space available, since the whole VG is 1.82T in size. So why can't I extend the satassd-data metadata size? (Side note: satassd is a "thin" group if that matters)

Is the satassd VG locked in some way?

Thanks for any insights.

Eric

P.S. Here's the result of "df -h"

Code:
root@pve:~# df -h
Filesystem                Size  Used Avail Use% Mounted on
udev                       16G     0   16G   0% /dev
tmpfs                     3.2G  110M  3.1G   4% /run
/dev/mapper/pve-root       94G   31G   59G  35% /
tmpfs                      16G   43M   16G   1% /dev/shm
tmpfs                     5.0M     0  5.0M   0% /run/lock
tmpfs                      16G     0   16G   0% /sys/fs/cgroup
/dev/fuse                  30M   28K   30M   1% /etc/pve
//192.168.1.75/vmbackups  5.5T  1.3T  4.2T  24% /mnt/pve/vmbackups
tmpfs                     3.2G     0  3.2G   0% /run/user/0

Last edited by mn124700; 05-07-2021 at 11:33 AM.
 
Old 05-18-2021, 08:17 PM   #6
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 9,140
Blog Entries: 4

Rep: Reputation: 3227
The usual solution is simply to add another physical volume to the pool, thereby increasing its capacity, and then resize the logical volume's filesystem to take advantage of the additional space.
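(A minimal sketch of that sequence; /dev/sdd1 and some_lv are hypothetical names standing in for the new partition and the LV to grow:)

```shell
# Initialize a new partition as an LVM physical volume and add it to the VG.
pvcreate /dev/sdd1
vgextend satassd /dev/sdd1

# For a conventional LV carrying a filesystem, grow LV and filesystem together:
lvextend --resizefs -L +100G satassd/some_lv
```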

FYI: If you're routinely dealing with a situation where various directories automatically fill up with things that you really don't need to keep forever, the logrotate utility is a remarkably versatile tool. Although it is customarily used to keep parts of /var/log under control, it can actually be used anywhere. It will automatically compress files after a specified time period and, if you wish, automatically discard those which have exceeded a specified age.
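(A sketch of what such a drop-in config might look like; the path and retention values are purely illustrative:)

```
# /etc/logrotate.d/app-dumps  --  hypothetical example for a non-/var/log directory
/srv/app/dumps/*.log {
    weekly
    rotate 8          # keep eight rotated copies
    compress
    missingok         # don't error if no files match
    notifempty        # skip empty files
    maxage 60         # delete rotated files older than 60 days
}
```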

Last edited by sundialsvcs; 05-18-2021 at 08:20 PM.
 
Old 05-18-2021, 08:31 PM   #7
jamison20000e
Senior Member
 
Registered: Nov 2005
Location: ...uncanny valley... infinity\1975; (randomly born:) Milwaukee, WI, US( + travel,) Earth( I wish,) END BORDER$!◣◢┌∩┐ Fe26-E,e...
Distribution: any GPL that works well on my cheapest; has been KDE or CLI but open... http://goo.gl/NqgqJx &c ;-)
Posts: 4,393
Blog Entries: 3

Rep: Reputation: 1436
Code:
bleachbit
to some extent
 
Old 05-19-2021, 01:11 AM   #8
berndbausch
LQ Addict
 
Registered: Nov 2013
Location: Tokyo
Distribution: Mostly Ubuntu and Centos
Posts: 6,258

Rep: Reputation: 1979
You have an LV named data that occupies 1.8TB. Therefore, the VG is full. I guess data is your thin pool, so that you should be able to create more thin LVs.

Actually, it's not quite clear to me what you want. What is certain is that data can't be extended without adding PVs to the VG.
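(To illustrate the point about the thin pool: thin LVs draw from the pool's virtual space, so one can still be created even though vgs reports 0 free extents. The name and size below are illustrative:)

```shell
# Create a 50G thin LV backed by the existing satassd/data thin pool.
lvcreate -V 50G -T satassd/data -n vm-120-disk-0
```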
 
1 member found this post helpful.
Old 05-19-2021, 08:25 AM   #9
GazL
LQ Veteran
 
Registered: May 2008
Posts: 5,923

Rep: Reputation: 3899
As I understand it, thin pools are made up of two components, data and metadata, each of which is stored in its own specialised type of LV.
e.g.

data
[data_tdata]
[data_tmeta]

The problem seems to be that the OP, or a predecessor thereof, didn't make the data_tmeta LV big enough and now wants to increase it, but all remaining space in the VG is allocated to the data_tdata component of the thin pool. The OP has free space in the data component, but none in the VG itself.

The problem (s)he has now is that -- unless things have changed recently -- there's no way to non-destructively shrink the LV holding the data_tdata component in order to free up unused extents to expand data_tmeta with.
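(A sketch of the way out once the VG regains free extents, e.g. after adding a PV; /dev/sdd1 is a hypothetical new partition:)

```shell
# Give the VG free extents again, then grow the pool metadata in place.
vgextend satassd /dev/sdd1
lvextend --poolmetadatasize +1G satassd/data

# Verify: [data_tmeta] should be larger and Meta% lower.
lvs -a -o lv_name,lv_size,metadata_percent satassd
```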


1st rule of LVM Club: never allocate it all. Always leave a portion free for unforeseen growth -- Doesn't help OP now of course, but it's a valuable lesson for the future, and the rest of us.


Personally, I'm not a fan of thin provisioning. While it has its niche uses, unless one is very attentive and can quickly and easily respond to the pool's need for additional storage, under-provisioning storage is just asking for trouble somewhere down the line.
 
Old 05-19-2021, 10:14 AM   #10
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 9,140
Blog Entries: 4

Rep: Reputation: 3227Reputation: 3227Reputation: 3227Reputation: 3227Reputation: 3227Reputation: 3227Reputation: 3227Reputation: 3227Reputation: 3227Reputation: 3227Reputation: 3227
Hmmm ... I had failed to notice that. Yes, "thin pools" are definitely an edge-case and I have never liked the idea of them. I don't know if the OP ever said why a decision might have been made to use them in his shop.

Last edited by sundialsvcs; 05-19-2021 at 10:15 AM.
 
  

