Old 06-14-2012, 12:08 PM   #1
monkeylove
LQ Newbie
 
Registered: Feb 2012
Posts: 14

Rep: Reputation: Disabled
Linux systems & defragmenting


From what I read, Linux systems defragment on the fly, i.e., they try to keep files together. There is a cost, however, in terms of performance.

For systems like MS Windows, however, files are written right away, so there is no performance cost, but fragmentation may take place.
 
Old 06-14-2012, 05:43 PM   #2
onebuck
Moderator
 
Registered: Jan 2005
Location: Central Florida 20 minutes from Disney World
Distribution: Slackware®
Posts: 13,925
Blog Entries: 44

Rep: Reputation: 3159
Member Response

Hi,
Quote:
Originally Posted by monkeylove View Post
From what I read, Linux systems defragment on the fly, i.e., they try to keep files together. There is a cost, however, in terms of performance.

For systems like MS Windows, however, files are written right away, so there is no performance cost, but fragmentation may take place.
A Linux filesystem will outperform FAT16, FAT32, or NTFS. Pick one, then do some benchmarking. Granted, this will depend on the scheduler the kernel is using to get optimum performance, but your statements are not even close for either an MS filesystem or a Linux filesystem. Please look at the following;

Quote:
Understanding UNIX/Linux file system:
Part I <- Understanding Linux filesystems
Part II <- Understanding Linux superblock
Part III <- An example of Surviving a Linux Filesystem Failure
Part IV <- Understanding filesystem Inodes
Part V <- Understanding filesystem directories
Part VI <- Understanding UNIX/Linux symbolic (soft) and hard links
Part VII <- Why isn’t it possible to create hard links across file system boundaries?
Comparison of file systems will help you.

Quote:
inode pointer structure - 'The inode pointer structure is a structure adopted by the inode of a file in the Unix File System (UFS) or other related file systems to list the addresses of a file's data blocks' + Ext3 for large file systems + A Basic UNIX Tutorial + A Fast File System for UNIX + Computer file systems
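To make that pointer structure concrete, here is a quick back-of-the-envelope calculation (a minimal Python sketch; the block size, pointer size, and pointer counts are assumed ext2/UFS-style defaults, not values read from any real system):

Code:
# Toy calculation for the classic inode layout quoted above: 12 direct
# block pointers plus single-, double-, and triple-indirect blocks.
# All numbers here are illustrative assumptions.
BLOCK_SIZE = 4096        # bytes per data block (assumed)
POINTER_SIZE = 4         # bytes per block address (assumed)
DIRECT = 12              # direct pointers held in the inode itself

ptrs_per_block = BLOCK_SIZE // POINTER_SIZE   # 1024 with these numbers

addressable_blocks = (DIRECT
                      + ptrs_per_block          # single indirect
                      + ptrs_per_block ** 2     # double indirect
                      + ptrs_per_block ** 3)    # triple indirect

max_bytes = addressable_blocks * BLOCK_SIZE
print(f"max addressable file size: {max_bytes / 2**40:.2f} TiB")
# -> about 4 TiB under these assumptions

The point is that a file's blocks are located by walking pointers, not by assuming the file sits in one contiguous run, which is why Unix-style filesystems tolerate scattered blocks in the first place.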
The above links and others can be found at 'Slackware-Links'. More than just Slackware® links!

HTH!
 
3 members found this post helpful.
Old 06-15-2012, 02:36 PM   #3
monkeylove
LQ Newbie
 
Registered: Feb 2012
Posts: 14

Original Poster
Rep: Reputation: Disabled
You might be right:

http://recoverymonkey.org/2010/03/29...pact-and-more/

although some might argue that results are mixed, and that as file systems are improved, performance comparisons might see-saw. The type of machines used, the types of files involved, etc., might also be considered.

In my case, the NAS that I access using Win 7 uses XFS, and file access is slow, although this might have to do with the CPU and memory of the NAS.

In general, one will probably want to look at this in terms of cross-platform needs:

http://insidethebrackets.blogspot.co...ilesystem.html

Still, fragmentation takes place on Windows, although I've experienced little slowdown with Win 7, probably because the defrag takes place in the background.
 
Old 06-15-2012, 03:02 PM   #4
frieza
Senior Member
 
Registered: Feb 2002
Location: harvard, il
Distribution: Ubuntu 11.4, DD-WRT micro plus ssh, lfs-6.6, Fedora 15, Fedora 16
Posts: 3,233

Rep: Reputation: 406
Quote:
Originally Posted by monkeylove View Post
From what I read, Linux systems defragment on the fly, i.e., they try to keep files together. There is a cost, however, in terms of performance.

For systems like MS Windows, however, files are written right away, so there is no performance cost, but fragmentation may take place.
First of all, you are misinformed. As best as I understand, Linux systems do NOT do 'defragmentation' 'on the fly'; they simply allocate files on the hard drive in a way that minimizes fragmentation, unless the drive is nearly full.


Second, as Windows drives get more and more fragmented, the overhead of accessing the fragmented drive DRASTICALLY increases and slows the machine down to the speed of a snail, hence the need to defragment periodically.
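If you don't want to take my word for it, you can count a file's fragments yourself. Here is a minimal sketch (Python, assuming a Linux box with the 'filefrag' tool from e2fsprogs installed; the path at the end is just a hypothetical example):

Code:
# Count how many extents (fragments) a file occupies by parsing the
# output of 'filefrag' from e2fsprogs, e.g. "somefile: 3 extents found".
import re
import subprocess

def extent_count(path: str) -> int:
    """Return the number of extents filefrag reports for path."""
    out = subprocess.run(["filefrag", path],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r"(\d+) extents? found", out)
    if match is None:
        raise RuntimeError(f"unexpected filefrag output: {out!r}")
    return int(match.group(1))

if __name__ == "__main__":
    print(extent_count("/var/log/syslog"))   # hypothetical example path

A count of 1 means the file is completely contiguous; a handful of extents on a large, long-lived file is normal and harmless.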

So which is more efficient?

Know your facts before you spout what amounts to little more than pro-Windows propaganda.
 
1 member found this post helpful.
Old 06-16-2012, 06:57 AM   #5
dwmolyneux
Member
 
Registered: Feb 2012
Location: United States of America
Distribution: "First Time Gentoo user",Debian, Fedora, LinuxMint
Posts: 124

Rep: Reputation: Disabled
Quote:
Originally Posted by frieza View Post
First of all, you are misinformed. As best as I understand, Linux systems do NOT do 'defragmentation' 'on the fly'; they simply allocate files on the hard drive in a way that minimizes fragmentation, unless the drive is nearly full.


Second, as Windows drives get more and more fragmented, the overhead of accessing the fragmented drive DRASTICALLY increases and slows the machine down to the speed of a snail, hence the need to defragment periodically.

So which is more efficient?

Know your facts before you spout what amounts to little more than pro-Windows propaganda.
I have to agree 100%

Edit: Sorry, I should have added this as well for the OP. I could see it if it were phrased as a question, but given how it is laid out and reads, it comes across as though you are possibly just trying to start something.
That is why I'm in agreement with frieza.

Last edited by dwmolyneux; 06-16-2012 at 07:04 AM.
 
1 member found this post helpful.
Old 06-16-2012, 02:55 PM   #6
jefro
Moderator
 
Registered: Mar 2008
Posts: 21,980

Rep: Reputation: 3624
Linux systems do require defragmentation. Some file systems need it less, and some users never need to worry about it. At one time, everyone did a tape backup each night and then restored the data each day. Both MS and Unix systems started each day with all files contiguous, since the restore by default wrote each file back in one contiguous piece.

To argue over which is faster is pointless. A Server 2008 system running some database with some type of files can never really be tested against a nearly similar Linux setup, for the most part; you have to measure the entire scope of the operation. In the real world they may have similar speeds, and for some uses one OS may prove to be faster for that task.
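If you do want numbers for your own workload, even a crude timing harness run on the actual machine tells you more than cross-OS folklore. A minimal sketch (Python; the file count and sizes are arbitrary assumptions, and the read pass will largely hit the page cache, so treat the output as a rough indication only):

Code:
# Crude I/O harness: write a batch of small files, then time reading
# them back. Only meaningful on the filesystem and hardware you care
# about; reads may be served from the page cache.
import os
import time
import tempfile

N_FILES = 1000
FILE_SIZE = 64 * 1024   # 64 KiB per file (assumed workload)

with tempfile.TemporaryDirectory() as workdir:
    payload = os.urandom(FILE_SIZE)

    start = time.perf_counter()
    for i in range(N_FILES):
        with open(os.path.join(workdir, f"f{i}"), "wb") as f:
            f.write(payload)
    write_secs = time.perf_counter() - start

    start = time.perf_counter()
    for i in range(N_FILES):
        with open(os.path.join(workdir, f"f{i}"), "rb") as f:
            f.read()
    read_secs = time.perf_counter() - start

print(f"wrote {N_FILES} files in {write_secs:.2f}s, read them in {read_secs:.2f}s")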
 
1 member found this post helpful.
Old 12-07-2013, 12:30 PM   #7
monkeylove
LQ Newbie
 
Registered: Feb 2012
Posts: 14

Original Poster
Rep: Reputation: Disabled
Sorry, I am not a computer expert. I just found out that I got my information from this page:

http://www.howtogeek.com/115229/
 
Old 12-07-2013, 01:27 PM   #8
lleb
Senior Member
 
Registered: Dec 2005
Location: Florida
Distribution: CentOS/Fedora/Pop!_OS
Posts: 2,983

Rep: Reputation: 551
Quote:
Originally Posted by monkeylove View Post
You might be right:

http://recoverymonkey.org/2010/03/29...pact-and-more/

although some might argue that results are mixed, and that as file systems are improved, performance comparisons might see-saw. The type of machines used, the types of files involved, etc., might also be considered.

In my case, the NAS that I access using Win 7 uses XFS, and file access is slow, although this might have to do with the CPU and memory of the NAS.

In general, one will probably want to look at this in terms of cross-platform needs:

http://insidethebrackets.blogspot.co...ilesystem.html

Still, fragmentation takes place on Windows, although I've experienced little slowdown with Win 7, probably because the defrag takes place in the background.
That is because XFS is not native to MS Windows 7. It is little different from a Linux system accessing Samba shares (FAT/NTFS): they are not native, so extra processing must occur. This will always slow things down.

You are always best served by accessing files on a native file system type. As pointed out, the FAT/NTFS file systems will continue to become more and more fragmented, drastically reducing their read/write performance. This is the nature of the inefficiencies in their file system management from the outset.

If you do not defragment FAT/NTFS systems OFTEN, you will watch the file system drag performance down, as well as watch the OS eat itself, causing more and more issues via the registry. This is the #1 reason why the MS world of OSs requires so much more hardware than the Linux world. It is also why no MS OS is designed to last longer than 3 years before it "should" be reformatted and reinstalled from scratch. Linux is designed to last 10+ years in some cases; just look at RHEL and SuSE server-class OSs. RHEL is built to last no less than 7 years without extended service contracts, and up to 12 years with extended service. Try that with the MS world: try running Win2k3 today on hardware from 7 years ago and having it keep up with the demands of modern hardware and Windows' most current server. Sorry, but it just cannot do that. RHEL can.

This is part of the difference between MS and Linux, but it still boils down to accessing files in their native file system format. Otherwise you are asking a twig to talk to a rock instead of the tree it is attached to; things just get lost in translation.
 
Old 12-08-2013, 05:06 AM   #9
salasi
Senior Member
 
Registered: Jul 2007
Location: Directly above centre of the earth, UK
Distribution: SuSE, plus some hopping
Posts: 4,070

Rep: Reputation: 897
Quote:
Originally Posted by monkeylove View Post
Sorry, I am not a computer expert. I just found out that I got my information from this page:

http://www.howtogeek.com/115229/
I'm sorry, but articles like that (the 'tell them something, but don't bother about the details, because that'll scare the newbies' ones) I find a bit problematic. Yes, they are simple and easy to read, but they often cause as much difficulty as they cure.

Quote:
...ext4 being the file system used by Ubuntu and most other current Linux distributions...
If they had said 'ext4 being the default file system...', you couldn't really argue, but, as it stands, that's not actually correct. (And I'm giving them an unnecessarily easy time on removable devices; I should probably insist on something like 'the installation filesystem' to cover the case of all sorts of filesystems being used on thumb drives, and DVDs, and even floppies, if anyone remembers those.)

Quote:
...allocates files in a more intelligent way...
You can argue that one, too; it is more intelligent in most use cases, but I can just about come up with corner cases in which the 'plain and simple' allocation strategy works as well, or better. No question that the majority are better served by the slightly more involved allocation strategy, but you do worry slightly about the over-simplification.

Quote:
Linux file systems scatter different files all over the disk
all over the disk wouldn't be that intelligent, given the stroke time limitations on hard disks (and what about ssds?), and it should have been 'all over the partition' for cases in which there is more than one partition. That's just a bit daft, but they probably didn't want to explain partitions...

Note also that it does not say that the article is only about ext4, just that 'ext4 is the file system used by Ubuntu'. So you'd expect these comments to be true of all applicable filesystems, and they're just not. BTRFS (which works quite differently from some older filesystems) has a background defrag util, although it is probably, currently, still more 'experimental' than anything else. I don't know in enough detail, but I'd doubt that the comments are true of the likes of NILFS2 and F2FS (although you'd probably only be using those on flash media, but the fact is that they are file systems with worthwhile use cases, even if slightly specialised ones).

I understand how difficult it is to write this type of article and really get it right, but you wonder at times whether anybody actually bothers. I mean, if they had only put in a 'ext4 is the most commonly used, so that's the filesystem this article is going to concentrate on' kind of comment, rather than just 'ext4 is the one used by Ubuntu', it would be much closer to correct.
 
Old 12-10-2013, 08:34 AM   #10
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 10,659
Blog Entries: 4

Rep: Reputation: 3941
Two things:

(1) "Fragmentation" is rarely an issue anymore, because disk drives are both vast and cheap.

(2) All modern-day file systems are designed to minimize fragmentation and to run for years without "optimization." (This specifically includes Windows' NTFS system.) Legacy/Compatibility systems like FAT were, if you remember, designed for floppy-disk drives, when they were "a damn sight better than CP/M."

(Even FAT-based implementations got a whole lot better as soon as it could be assumed that the machine would have a decent amount of memory to work with ... which most emphatically was not the case when these early systems were designed. Better algorithms take more memory, and that is what you did not have in the beginning.)

Last edited by sundialsvcs; 12-10-2013 at 08:35 AM.
 
Old 12-10-2013, 08:59 AM   #11
schneidz
LQ Guru
 
Registered: May 2005
Location: boston, usa
Distribution: fedora-35
Posts: 5,313

Rep: Reputation: 918
Quote:
Originally Posted by monkeylove View Post
...
Still, fragmentation takes place on Windows, although I've experienced little slowdown with Win 7, probably because the defrag takes place in the background.
Fragmentation is less likely on newer PCs, since hard drives are 500 GB now and there is usually contiguous space available to write (contrast that with 10 years ago, when the norm was 5 GB).
Quote:
Originally Posted by monkeylove View Post
From what I read, Linux systems defragment on the fly, i.e., they try to keep files together. There is a cost, however, in terms of performance.

For systems like MS Windows, however, files are written right away, so there is no performance cost, but fragmentation may take place.
I think Windows rot (slow disk access) comes from how FAT32/NTFS doesn't pad in empty space after newly created files.
E.g.: you create file-1.txt and write 'hello' in it.
Then you create file-2.txt and write 'chun-li' in it; the filesystem will put file-1.txt and file-2.txt in succession.
So 3 months later you open file-1.txt and append 'world'; the filesystem will now hold the first half of file-1.txt, then file-2.txt, then the second half of file-1.txt (and it will take more time to find all the fragments of file-1.txt).
Multiply that by thousands of files over many months -- defragging the hard drive helps this a lot.
Most other filesystems, like ext2, assume there will be changes to files and pad in some extra kilobytes between them. This is slightly wasteful, but the benefit of less fragmentation is faster file access.
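To see why that padding matters, here is a toy simulation of exactly the file-1.txt/file-2.txt scenario above (Python; the block counts and pad size are made-up numbers, and real allocators are far more sophisticated than this):

Code:
# Toy model: an allocator that packs files back to back fragments
# file-1.txt when it grows, while one that reserves a few blocks of
# slack after each file lets it grow in place.

def allocate(events, pad_blocks):
    """events: (filename, blocks_written) in order. Returns name -> extents."""
    layout = {}   # name -> list of [start, length] extents
    slack = {}    # name -> free blocks still reserved after the last extent
    next_free = 0
    for name, nblocks in events:
        if name in layout and slack[name] >= nblocks:
            # Grow the last extent into its reserved slack: no new fragment.
            layout[name][-1][1] += nblocks
            slack[name] -= nblocks
        elif name in layout:
            # No slack left: data lands at the end of the used area,
            # giving the file a second extent (a fragment).
            layout[name].append([next_free, nblocks])
            slack[name] = pad_blocks
            next_free += nblocks + pad_blocks
        else:
            layout[name] = [[next_free, nblocks]]
            slack[name] = pad_blocks
            next_free += nblocks + pad_blocks
    return layout

# file-1.txt written, file-2.txt written, then file-1.txt grows later.
events = [("file-1.txt", 2), ("file-2.txt", 2), ("file-1.txt", 2)]

for pad in (0, 4):
    extents = {name: len(e) for name, e in allocate(events, pad).items()}
    print(f"pad={pad} blocks -> extents per file: {extents}")

With no padding, file-1.txt ends up split into two extents; with a few blocks of slack reserved after each file, everything stays in one piece.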

Last edited by schneidz; 12-10-2013 at 09:58 AM.
 
Old 09-23-2015, 07:28 AM   #12
monkeylove
LQ Newbie
 
Registered: Feb 2012
Posts: 14

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by frieza View Post
First of all, you are misinformed. As best as I understand, Linux systems do NOT do 'defragmentation' 'on the fly'; they simply allocate files on the hard drive in a way that minimizes fragmentation, unless the drive is nearly full.


Second, as Windows drives get more and more fragmented, the overhead of accessing the fragmented drive DRASTICALLY increases and slows the machine down to the speed of a snail, hence the need to defragment periodically.

So which is more efficient?

Know your facts before you spout what amounts to little more than pro-Windows propaganda.
Sorry, that's what I meant: scattering the files to avoid fragmentation.

---------- Post added 09-23-15 at 08:29 PM ----------

Quote:
Originally Posted by salasi View Post
I'm sorry, but articles like that (the 'tell them something, but don't bother about the details, because that'll scare the newbies' ones) I find a bit problematic. Yes, they are simple and easy to read, but they often cause as much difficulty as they cure.



If they had said 'ext4 being the default file system...', you couldn't really argue, but, as it stands, that's not actually correct. (And I'm giving them an unnecessarily easy time on removable devices; I should probably insist on something like 'the installation filesystem' to cover the case of all sorts of filesystems being used on thumb drives, and DVDs, and even floppies, if anyone remembers those.)



You can argue that one, too; it is more intelligent in most use cases, but I can just about come up with corner cases in which the 'plain and simple' allocation strategy works as well, or better. No question that the majority are better served by the slightly more involved allocation strategy, but you do worry slightly about the over-simplification.



all over the disk wouldn't be that intelligent, given the stroke time limitations on hard disks (and what about ssds?), and it should have been 'all over the partition' for cases in which there is more than one partition. That's just a bit daft, but they probably didn't want to explain partitions...

Note also that it does not say that the article is only about ext4, just that 'ext4 is the file system used by Ubuntu'. So you'd expect these comments to be true of all applicable filesystems, and they're just not. BTRFS (which works quite differently from some older filesystems) has a background defrag util, although it is probably, currently, still more 'experimental' than anything else. I don't know in enough detail, but I'd doubt that the comments are true of the likes of NILFS2 and F2FS (although you'd probably only be using those on flash media, but the fact is that they are file systems with worthwhile use cases, even if slightly specialised ones).

I understand how difficult it is to write this type of article and really get it right, but you wonder at times whether anybody actually bothers. I mean, if they had only put in a 'ext4 is the most commonly used, so that's the filesystem this article is going to concentrate on' kind of comment, rather than just 'ext4 is the one used by Ubuntu', it would be much closer to correct.
Sorry, I was referring to some of the comments below the article.
 
Old 09-23-2015, 07:32 AM   #13
monkeylove
LQ Newbie
 
Registered: Feb 2012
Posts: 14

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by sundialsvcs View Post
Two things:

(1) "Fragmentation" is rarely an issue anymore, because disk drives are both vast and cheap.

(2) All modern-day file systems are designed to minimize fragmentation and to run for years without "optimization." (This specifically includes Windows' NTFS system.) Legacy/Compatibility systems like FAT were, if you remember, designed for floppy-disk drives, when they were "a damn sight better than CP/M."

(Even FAT-based implementations got a whole lot better as soon as it could be assumed that the machine would have a decent amount of memory to work with ... which most emphatically was not the case when these early systems were designed. Better algorithms take more memory, and that is what you did not have in the beginning.)
It's an issue when a large enough amount of data is involved.

Also, systems may slow down depending on the type of use, e.g., when large numbers of small and large files are added, deleted, and moved daily. At least, that's what I gathered from comparing some NTFS desktops with others.
 
Old 09-24-2015, 07:02 AM   #14
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 10,659
Blog Entries: 4

Rep: Reputation: 3941
So, why did a three-year-old thread just get resuscitated?
 
Old 09-24-2015, 10:15 PM   #15
jefro
Moderator
 
Registered: Mar 2008
Posts: 21,980

Rep: Reputation: 3624
This is something we don't see often: a three-year-old thread resurrected by the OP, monkeylove, it seems.

monkeylove, your link is a generic web page. Although its point is that you don't need to defragment in Linux, that really only applies to the general user and to common filesystems. There can be instances where a Linux user may wish to investigate their filesystem and make adjustments; no filesystem is foolproof.
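For the ext4 case, that 'investigate' step even has an official tool: e4defrag from e2fsprogs has an assessment-only mode. A minimal sketch (Python; assumes a Linux box with e2fsprogs, usually needs root, and the target path is just an example):

Code:
# Ask 'e4defrag -c' (check-only mode, makes no changes) how fragmented
# an ext4 directory tree is before deciding whether to act at all.
import subprocess
import sys

def fragmentation_report(path: str) -> str:
    """Run e4defrag in assessment-only mode and return its report text."""
    result = subprocess.run(["e4defrag", "-c", path],
                            capture_output=True, text=True)
    if result.returncode != 0:
        sys.exit(f"e4defrag failed: {result.stderr.strip()}")
    return result.stdout

if __name__ == "__main__":
    print(fragmentation_report("/home"))   # example target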
 
  

