Old 06-28-2013, 04:14 AM   #1
catkin
LQ 5k Club
 
Registered: Dec 2008
Location: Tamil Nadu, India
Distribution: Debian
Posts: 8,578
Blog Entries: 31

Rep: Reputation: 1208
How is the minimum file system free space decided?


A minimum of 20% free space on file systems is often advised. When a reason is given, it is "performance" or "fragmentation".

Is 20% an optimal figure? Or is it a magic number like having twice the RAM as swap space?

Does a 3 TB file system really need ~600 GB kept free, without which it will take a long time to find space for a new file and probably have to fragment it?

What are the actual processes that need to be considered/understood when deciding on an operational free space limit? Or are practical tests required, rather than gedanken experiments?

Last edited by catkin; 06-28-2013 at 04:30 AM. Reason: Missing ?
 
Old 06-28-2013, 04:22 AM   #2
acid_kewpie
Moderator
 
Registered: Jun 2001
Location: UK
Distribution: Gentoo, RHEL, Fedora, Centos
Posts: 43,417

Rep: Reputation: 1985
It's not so much optimal as logical and/or wise. Log files can easily run away with themselves, a database file can suddenly grow, and so on. The headroom just gives you time and flexibility to deal with these things before they exhaust disk space and cause other issues.

"swap = doubel the ram" is very obsolete though.
 
Old 07-01-2013, 01:16 AM   #3
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Rocky 9.2
Posts: 18,359

Rep: Reputation: 2751
Actually, the default is 5% on ext-type filesystems:
Quote:
-m reserved-blocks-percentage
Specify the percentage of the filesystem blocks reserved for the super-user. This avoids fragmentation, and allows root-owned daemons, such as syslogd(8), to continue to function correctly after non-privileged processes are prevented from writing to the filesystem. The default percentage is 5%.
http://linux.die.net/man/8/mkfs.ext4
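You can check and change that on an existing file system with tune2fs; for example (the device name below is just a placeholder):
Code:
# show the current reservation on an existing ext2/3/4 filesystem
tune2fs -l /dev/sdb1 | grep -i 'reserved block'
# drop the root reservation to 1% on a large data-only filesystem
tune2fs -m 1 /dev/sdb1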

If you read that page, there are a few other considerations involved as well, e.g.
Quote:
resize_inode
Reserve space so the block group descriptor table may grow in the future. Useful for online resizing using resize2fs. By default mke2fs will attempt to reserve enough space so that the filesystem may grow to 1024 times its initial size. This can be changed using the resize extended option.
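So if you do end up growing the volume later, the online resize itself is straightforward. A rough example, assuming the file system sits on a logical volume that is being enlarged (names are placeholders):
Code:
# grow the logical volume, then grow the mounted ext4 filesystem to fill it
lvextend -L +50G /dev/vg0/data
resize2fs /dev/vg0/data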
 
Old 07-01-2013, 09:48 PM   #4
catkin
LQ 5k Club
 
Registered: Dec 2008
Location: Tamil Nadu, India
Distribution: Debian
Posts: 8,578

Original Poster
Blog Entries: 31

Rep: Reputation: 1208
Thanks acid_kewpie and chrism01

Can I conclude that "Minimum 20% free space on file systems" has about as much rational basis as "Swap = 2 x RAM"?

Certainly we need some free space:
  1. To allow for usage growth, perhaps sudden.
  2. In the case of ext[234] (all three?), space reserved for root processes (default 5%) and re-sizing.
What about keeping some free space to avoid excessive fragmentation? That makes sense -- both to ease the work required of the algorithm that identifies free blocks to write to and to reduce the head movements required to write and read a file. But it must depend on the size of the files: if the files were all smaller than a block, there could be no fragmentation. As regards this consideration, 20% (less reserved and resizing blocks) must be a rough working rule, and one that is increasingly wasteful as disk/volume sizes increase.
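Rather than guessing, it should be possible to measure how fragmented a file system's free space actually is; e2fsprogs includes e2freefrag for free-space extents and filefrag for individual files (the device and file names below are only examples):
Code:
# histogram of free-space extent sizes on an ext2/3/4 filesystem
e2freefrag /dev/sdb1
# how many extents a particular file has been split into
filefrag -v /path/to/bigfile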
 
Old 07-01-2013, 11:59 PM   #5
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,573

Rep: Reputation: 2142
Quote:
Originally Posted by catkin View Post
Can I conclude that "Minimum 20% free space on file systems" has about as much rational basis as "Swap = 2 x RAM"?
Swap = 2x RAM really makes no logical sense; I don't know why that was ever a "rule". The swap should simply be enough to handle any RAM overflow given your intended (and "accidental") usage. There's nothing special that happens at swap = 50% or 100% or 200% of RAM size that gives that "rule" any credibility.

The 5% reserved space for root is also incredibly antiquated. This default should NEVER have been a fixed percentage; it should have been a fixed size somewhere around 100 MB, IMO. It makes no sense to reserve 5% for root on any filesystem. I have a 40 TB RAID on one of my systems, which defaults to 2 TB of reserved space for root. That's completely illogical; root would never need anywhere near that much space on that filesystem if it were to fill up with regular user data.
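For what it's worth, you can get the fixed-size behaviour yourself after the fact with tune2fs -r, which takes a block count rather than a percentage (the figure below assumes 4 KiB blocks, and the device name is a placeholder):
Code:
# reserve ~100 MiB for root instead of 5%: 25600 blocks x 4 KiB = 100 MiB
tune2fs -r 25600 /dev/md0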

The 20% free space rule on file systems is very different, however. The main purpose of that rule is to prevent fragmentation, which has very real consequences. I don't know if you've ever used a filesystem up to 95+% usage and kept track of its performance, but once you pass about 85-90% usage the write speed falls off a cliff. Personally, I try to keep my systems below 70%, and if they ever break 90% I make it a priority to re-distribute the data to drop them back down. If I don't, they become almost unusably slow. I'm mostly working with big files though (1-3 GB each), where lack of contiguous space starts to create problems when writing. If you're working with billions of 1 kB files then the rules would change a bit.
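It's easy to spot-check write speed yourself at different usage levels with a throwaway large write; something like this (the path is a placeholder, and conv=fdatasync makes dd report the real on-disk rate rather than the page-cache rate):
Code:
# write a 2 GB test file, report the sustained write speed, then clean up
dd if=/dev/zero of=/data/ddtest bs=1M count=2048 conv=fdatasync
rm /data/ddtest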

Last edited by suicidaleggroll; 07-02-2013 at 12:00 AM.
 
Old 07-02-2013, 02:41 AM   #6
acid_kewpie
Moderator
 
Registered: Jun 2001
Location: UK
Distribution: Gentoo, RHEL, Fedora, Centos
Posts: 43,417

Rep: Reputation: 1985
Quote:
Originally Posted by suicidaleggroll View Post
Swap = 2x RAM really makes no logical sense; I don't know why that was ever a "rule". The swap should simply be enough to handle any RAM overflow given your intended (and "accidental") usage. There's nothing special that happens at swap = 50% or 100% or 200% of RAM size that gives that "rule" any credibility.
When buying a RAM upgrade meant choosing between that and feeding your children, there was good logic in the trade-off between financial cost and access speed. Irrelevant now, of course.
 
Old 07-02-2013, 09:23 PM   #7
catkin
LQ 5k Club
 
Registered: Dec 2008
Location: Tamil Nadu, India
Distribution: Debian
Posts: 8,578

Original Poster
Blog Entries: 31

Rep: Reputation: 1208
Thanks suicidaleggroll

Good to have an observation on the rationality of 5%.
Quote:
Originally Posted by suicidaleggroll View Post
I don't know if you've ever used a filesystem up to 95+% usage and kept track of its performance, but once you pass about 85-90% usage the write speed falls off a cliff.
...
I'm mostly working with big files though (1-3 GB each) ...
Thanks for the real-world data.

I've installed Phoronix Test Suite in preparation for doing some tests. Haven't figured out how to make it test a specific file system yet ...
 
Old 08-06-2013, 06:13 AM   #8
catkin
LQ 5k Club
 
Registered: Dec 2008
Location: Tamil Nadu, India
Distribution: Debian
Posts: 8,578

Original Poster
Blog Entries: 31

Rep: Reputation: 1208
Update

The Phoronix test suite didn't do what I wanted so I wrote a script and ran initial tests on a 15 GB ext4 file system on a SATA 6 HDD and port.

The results do not tell a clear story. Of the ten plots:
  • Three show the conventional result of task times getting longer as file system usage increases but the effect starts around 50 to 60%, not the conventional 75 to 80%. All three tests concern 10,000 directories (creating them, finding them and finding and removing them).
  • Five show task times staying broadly similar over the whole test, with file system usage going from 30 to 98%, apart from the test task taking somewhat longer during roughly 60 to 80% usage and then returning to pre-60% times for the rest of the test up to 98% usage. These five tests concern either 10,000 directories (creating them, finding them and finding and removing them) or a 0.5 GiB file (reading or creating it).
  • The remaining two plots show test times more or less consistent from 30 to 80% file system usage and then reducing by more than 50% from 80 to 98% usage.
A fuller report, including the script's git link and an attached tar archive (of the test log, report and gnuplot plots), is in Blue Light's wiki at (shortlink) http://wiki.bluelightav.org/x/uANIAQ
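For anyone who wants to try something similar without fetching the script, the basic idea is just a fill-and-measure loop. Below is only a rough sketch (the paths, the 1 GiB fill step and the single 10,000-directory test are simplifications, not copied from the actual script):
Code:
#!/bin/bash
# Rough fill-and-measure sketch: pad the file system in steps and time a
# metadata-heavy operation at each usage level.
fs_mount=/mnt/test          # mount point of the file system under test
pad_dir=$fs_mount/pad       # filler data used to push usage up
work_dir=$fs_mount/work     # where the timed test runs
mkdir -p "$pad_dir" "$work_dir"
step=0
while true; do
    usage=$(df -P "$fs_mount" | awk 'NR==2 {print $5}' | tr -d '%')
    [ "$usage" -ge 98 ] && break
    # time creating and then removing 10,000 directories at this usage level
    start=$(date +%s)
    for i in $(seq 1 10000); do mkdir "$work_dir/d$i"; done
    rm -rf "$work_dir"/d*
    end=$(date +%s)
    echo "${usage}% used: $((end - start)) seconds"
    # add ~1 GiB of filler and go round again
    dd if=/dev/zero of="$pad_dir/pad$step" bs=1M count=1024 2>/dev/null
    step=$((step + 1))
done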
 
Old 08-06-2013, 08:34 AM   #9
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,573

Rep: Reputation: 2142
Quote:
Originally Posted by catkin View Post
The Phoronix test suite didn't do what I wanted so I wrote a script and ran initial tests on a 15 GB ext4 file system on a SATA 6 HDD and port.

The results do not tell a clear story. Of the ten plots:
  • Three show the conventional result of task times getting longer as file system usage increases but the effect starts around 50 to 60%, not the conventional 75 to 80%. All three tests concern 10,000 directories (creating them, finding them and finding and removing them).
  • Five show task times staying broadly similar over the whole test, with file system usage going from 30 to 98%, apart from the test task taking somewhat longer during roughly 60 to 80% usage and then returning to pre-60% times for the rest of the test up to 98% usage. These five tests concern either 10,000 directories (creating them, finding them and finding and removing them) or a 0.5 GiB file (reading or creating it).
  • The remaining two plots show test times more or less consistent from 30 to 80% file system usage and then reducing by more than 50% from 80 to 98% usage.
A fuller report, including the script's git link and an attached tar archive (of the test log, report and gnuplot plots), is in Blue Light's wiki at (shortlink) http://wiki.bluelightav.org/x/uANIAQ

Interesting. Your first and third bullets tend to match up with what I've seen in practice; not sure what's going on with the second bullet.
 
  

