
LinuxQuestions.org (/questions/)
-   Linux - General (https://www.linuxquestions.org/questions/linux-general-1/)
-   -   defrag on Linux (https://www.linuxquestions.org/questions/linux-general-1/defrag-on-linux-331862/)

ashley75 06-09-2005 09:19 AM

defrag on Linux
 
Hi all,

how would you defrag on Linux????

by what command????


thanks,

DrOzz 06-09-2005 09:25 AM

You don't have to defrag in Linux .. the file system is organized and stored more efficiently than on a Windows machine. If you really want to, I can
suggest you go to something like www.google.ca/linux and search for
"linux defrag" or something, and grab a utility .. they do exist, but are unnecessary ..

ashley75 06-09-2005 09:36 AM

1. Could you please explain how Linux organizes and stores files better than Windows?

2. So there is no command on Linux that we can use to do the defrag, except for a third-party tool?

thanks

trickykid 06-09-2005 09:44 AM

Quote:

Originally posted by ashley75
1. Could you please explain how Linux organizes and stores files better than Windows?

2. So there is no command on Linux that we can use to do the defrag, except for a third-party tool?

thanks

1. Try reading this page, especially under the fragmentation and optimization section.. http://dataexpedition.com/~sbnoble/T...lesystems.html

2. Nope, don't worry about it. Even in Windows, you should only worry about defragging your filesystem if it's 20% fragmented or more.. which is rare in most cases. The only time I ever saw any type of performance increase on any OS after defragging was on Win98 and Win95 systems.. which used a horrible filesystem.

Boow 06-09-2005 11:35 AM

Well, if you're so obsessed with defragging in Linux, give fsck -a a try. You'll need a live CD to fsck the / partition since it's mounted all the time.

trickykid 06-09-2005 11:48 AM

Quote:

Originally posted by Boow
Well, if you're so obsessed with defragging in Linux, give fsck -a a try. You'll need a live CD to fsck the / partition since it's mounted all the time.
That's why you can tell shutdown to perform an fsck on the / filesystem upon reboot while it's still mounted ro.. no need for a live CD.. pah!
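Something like this, on a sysvinit-style setup (just a sketch.. the exact mechanics vary by distro, and newer init systems handle it differently):

Code:

# force a full fsck of the filesystems on the next boot, then reboot
shutdown -r -F now

# or, on distros whose boot scripts honour the flag file:
touch /forcefsck
reboot

The root filesystem gets checked while it's still mounted read-only, so no live CD is needed.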

Boow 06-10-2005 05:40 PM

forgot about that one

rridler 07-25-2006 09:48 PM

How would you use "fsck -a" on a reboot?

metalx1000 09-12-2006 11:38 AM

Defragging
 
Quote:

Originally Posted by ashley75
1. Could you please explain how Linux organizes and stores files better than Windows?

2. So there is no command on Linux that we can use to do the defrag, except for a third-party tool?

thanks

I copied this from http://www.whylinuxisbetter.net/

Now imagine your hard disk is a huge file cabinet, with millions of drawers (thanks to Roberto Di Cosmo for this comparison). Each drawer can only contain a fixed amount of data. Therefore, files that are larger than what such a drawer can contain need to be split up. Some files are so large that they need thousands of drawers. And of course, accessing these files is much easier when the drawers they occupy are close to one another in the file cabinet.

Now imagine you're the owner of this file cabinet, but you don't have time to take care of it, and you want to hire someone to take care of it for you. Two people come for the job, a woman and a man.

* The man has the following strategy: he just empties the drawers when a file is removed, splits up any new file into smaller pieces the size of a drawer, and randomly stuffs each piece into the first available empty drawer. When you mention that this makes it rather difficult to find all the pieces of a particular file, the response is that a dozen boys must be hired every weekend to put the chest back in order.
* The woman has a different technique: she keeps track, on a piece of paper, of contiguous empty drawers. When a new file arrives, she searches this list for a sufficiently long row of empty drawers, and this is where the file is placed. In this way, provided there is enough activity, the file cabinet is always tidy.

Without a doubt, you should hire the woman (you should have known it, women are much better organized :) ). Well, Windows uses the first method; Linux uses the second one. The more you use Windows, the slower it is to access files; the more you use Linux, the faster it is. The choice is up to you!

perry 09-13-2007 02:12 PM

Defrag for Linux 2.0
 
Quote:

[Wftl-lug] Linux file system defrag

Lew Pitcher wftl-lug@salmar.com
Sun, 03 Mar 2002 00:33:49 -0500

Here's another one for you, boys and girls...

I frequent about 20 or so Linux and Unix newsgroups, and the question of
linux defrag has come up so often in these groups that I've put together
a stock answer that tries to explain what 'fragmentation' is and what
linux does about it. However, although my explanation is detailed in
some respects, it lacks a lot of information in others. I think that I
need to include more information on (1) how linux filesystems (ext2,
ext3, afs, etc.) manage file data block arrangement, in the light of
'file fragmentation', and what performance exposures _are_ present in
the filesystems.

So, I'm asking for suggestions; does anyone here have a good (simple)
explanation of how our filesystems work, and where their weaknesses are?
I'll take anything I can get, and credit you with the information.

FWIW, what follows is my 'stock defrag' answer; enjoy...



In a single-user, single-tasking OS, it's best to keep all blocks for a
file together, because _most_ of the disk accesses over a given period
of time will be against a single file. In this scenario, the read-write
heads of your HD advance sequentially through the hard disk. In the same
sort of system, if your file is fragmented, the read-write heads jump
all over the place, adding seek time to the hard disk access time.

In a multi-user, multi-tasking, multi-threaded OS, many files are being
accessed at any time, and, if left unregulated, the disk read-write
heads would jump all over the place all the time. Even with
'defragmented' files, there would be as much seek-time delay as there
would be with a single-user single-tasking OS and fragmented files.

Fortunately, multi-user, multi-tasking, multi-threaded OSs are usually
built smarter than that. Since file access is multiplexed from the point
of view of the device (multiple file accesses from multiple, unrelated
processes, with no order imposed on the sequence of blocks requested),
the device driver incorporates logic to accommodate the performance hits,
like reordering the requests into something sensible for the device
(i.e., the elevator algorithm).

In other words, fragmentation is a concern when one (and only one)
process accesses data from one (and only one) file. When more than one
file is involved, the disk addresses being requested are 'fragmented'
with respect to the sequence that the driver has to service them, and
thus it doesn't matter to the device driver whether or not a file was
fragmented.

To illustrate:

I have two programs executing simultaneously, each reading two different
files.

The files are organized sequentially (unfragmented) on disk...
[1.1][1.2][1.3][2.1][2.2][2.3][3.1][3.2][3.3][4.1][4.2][4.3][4.4]


Program 1 reads file 1, block 1
                file 1, block 2
                file 2, block 1
                file 2, block 2
                file 2, block 3
                file 1, block 3

Program 2 reads file 3, block 1
                file 4, block 1
                file 3, block 2
                file 4, block 2
                file 3, block 3
                file 4, block 4

The OS scheduler causes the programs to be scheduled and executed such
that the device driver receives requests
    file 3, block 1
    file 1, block 1
    file 4, block 1
    file 1, block 2
    file 3, block 2
    file 2, block 1
    file 4, block 2
    file 2, block 2
    file 3, block 3
    file 2, block 3
    file 4, block 4
    file 1, block 3

Graphically, this looks like...

[1.1][1.2][1.3][2.1][2.2][2.3][3.1][3.2][3.3][4.1][4.2][4.3][4.4]
`---------------------------->[3.1]
[1.1]<--------------------------'
    `--------------------------------------->[4.1]
     [1.2]<------------------------------------'
         `------------------------>[3.2]
               [2.1]<----------------'
                   `----------------------------->[4.2]
                    [2.2]<--------------------------'
                        `-------------->[3.3]
                         [2.3]<-----------'
                             `----------------------------->[4.4]
          [1.3]<----------------------------------------------'

As you can see, the accesses are already 'fragmented' and we haven't
even reached the disk yet (up to this point, the accesses have been
against 'logical' addresses). I have to stress this: the above
situation is _no different_ from an MSDOS single file physical access
against a fragmented file.

So, how do we minimize the effect seen above? If you are MSDOS, you
reorder the blocks on disk to match the (presumed) order in which they
will be requested. On the other hand, if you are Linux, you reorder the
_requests_ into a regular sequence that minimizes disk access using
something like an elevator algorithm. You also read ahead on the drive
(optimizing disk access), buffer most of the file data in memory, and
you only write dirty blocks. In other words, you minimize the effect of
'file fragmentation' as part of the other optimizations you perform
on the _access requests_ before you execute them.

Now, this is not to say that 'file fragmentation' is a good thing. It's
just that 'file fragmentation' doesn't have the *impact* here that it
would have in MSDOS-based systems. The performance difference between a
'file fragmented' Linux file system and a 'file unfragmented' Linux
file system is minimal to none, where the same performance difference
under MSDOS would be huge.

Under the right circumstances, fragmentation is a neutral thing, neither
bad nor good. As to defraging a Linux filesystem (ext2fs), there are
tools available, but (because of the design of the system) these tools
are rarely (if ever) needed or used. That's the impact of designing up
front the multi-processing/multi-tasking multi-user capacity of the OS
into its facilities, rather than tacking multi-processing/multi-tasking
multi-user support on to an inherently single-processing/single-tasking
single-user system.


== And, I'll add Peter T Breuer's <ptb@lab.it.uc3m.es> comments from
== Message-ID: <lo73t9.bdt.ln@news.it.uc3m.es>, posted on
== Wed, 05 Dec 2001 23:52:52 GMT ...

All "fragmented" drives are better than "unfragmented" ones on a
multiuser multitasking o/s. The point is that the machine is doing
many things simultaneously, so it has to jump arround even if one task
is interested in only one file. Tehre will be up to a hundred tasks
doing i/o simultaneously.

Yes, all disk drivers use elevator algorithms, in any o/s.

But to answer your question, ext2fs spreads blocks out evenly through
the disk, using various strategies (well, a single mixed strategy)..
This reduces the average seek time on a single elevator pass.

Peter

== And I'll conclude with Eric P. McCoy's <ctr2sprt@yahoo.com> comments
== from Message-ID: <87wv019qqt.fsf@providence.local>, posted on
== Wed, 05 Dec 2001 23:52:52 GMT ...

"Linux filesystems" is a little misleading. e2fs doesn't generally
have fragmentation issues, for certain definitions of "fragmentation."

The short answer is this: e2fs splits the disks up into block groups,
which are contiguous regions of blocks. The group will contain a
certain number of inodes and (data) blocks. When you create an inode,
Linux probably chooses the group with the largest number of free
(data) blocks. When you write to an inode, Linux will preferentially
allocate (data) blocks in the same group as the inode. When it has
to, it will move on to another (later) group, but will still try to
keep the blocks together.

The end result of this is that data is generally fragmented by only a
few blocks, and almost always travels in the same direction. That's
as opposed to the front-to-back fragmentation which could, and
frequently did, occur in FAT and its derivatives.

The above works great until the file system is nearly full, at which
point free blocks are scattered all across the disk in discontiguous
locations. This is why, on a nearly-full file system (above 95% or
so), e2fs performance will degrade _substantially_.
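If you want to see this layout on your own disks, dumpe2fs will list every block group and how many free blocks and inodes each one still has (just a sketch; it assumes an ext2/ext3 partition such as /dev/hda1 and root privileges):

Code:

# the whole per-group layout of an ext2/ext3 filesystem
dumpe2fs /dev/hda1 | less

# or just the group headers and their free-block counts
dumpe2fs /dev/hda1 | grep -E '^Group|free blocks'

A nearly full filesystem shows most groups with few or no free blocks, which is exactly the situation where allocation starts scattering.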

Other file systems (HPFS in particular) are similar, but call groups
"bands" or "stripes" instead. HPFS is actually worse than e2fs when
nearly full, because it uses pseudo B-trees for the directory
structure which periodically need to be rebalanced. The problem there
is that, when the file system is nearly full, directories may need to
be rebalanced into many different groups, which will obviously cause
enormous slowdowns. e2fs uses a crummy, paleolithic array for its
directories, which results in far worse performance overall, but wins
out in this one narrow case (or can, depending on what's done to the
directory).

Sorry, but most people on this group know better than to mention "file
systems" and "explain" in the same sentence when I am around.

Eric McCoy <ctr2sprt@yahoo.com>



--
Lew Pitcher

Master Codewright and JOAT-in-training
Registered (Slackware) Linux User #112576 (http://counter.li.org/)


So you see, there were only three bowls of soup on the table when Goldilocks decided that...

- Perry


knersus 09-28-2007 02:21 PM

Perry said, amongst other things:
Quote:

Now, this is not to say that 'file fragmentation' is a good thing. It's just that 'file fragmentation' doesn't have the *impact* here that it
would have in MSDOS-based systems. The performance difference between a
'file fragmented' Linux file system and a 'file unfragmented' Linux
file system is minimal to none, where the same performance difference
under MSDOS would be huge.
All the answers re Linux and defragging mention ext2/ext3/hpfs/etc. - i.e. all are Linux/Unix native file systems. What about VFAT file systems under Linux? Is the VFAT fs as clever as ext2fs, or is it a straight port of the FAT16/FAT32 MS-DOS fs? Many of us have such drives in our systems, as that is for the most part the only sane way to share data in a multiboot system with MS products.

As an example, I have quite a large USB drive with all my music, photos and videos on it, and with all the re-organising, deleting and cleaning up going on, it does become very fragmented over time. Now, I can always defrag by booting into Windows and running their defragger. Alternatively I can move all the files to some empty space on another drive and re-format/clear the multimedia drive, but that takes time and I do not always have 160 GB of spare capacity handy. So it would be really nice to have a Linux defragger for the odd Windows/MS-DOS drive.

PTrenholme 09-28-2007 03:46 PM

There is a Windows driver available that will let you use an ext2 or ext3 file system from Windows NT or XP. (Since the driver doesn't have a "Microsoft approved" signature, it's unlikely that it could be used with Vista.)

Using the driver, you can use a native Linux file system for your Windows storage, and eliminate the need for FAT storage.

knersus 09-30-2007 02:15 PM

Windows fs driver
 
Thanks, I will look into the Windows fs driver. There is of course the problem of portability - I would have to install that driver on all of the PCs where that USB disk may be plugged in. I suppose that the best way to do this would be to split the disk into 2 partitions, one being a FAT32 and the other a suitable Linux native partition. The FAT32 partition can then be mounted anywhere and can be used to hold the fs driver installation file for the other partition. The only other drawbacks will then be Vista, and having the necessary permissions on the PC to install the driver.
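In case anyone wants to try the same thing, the split would look roughly like this (a sketch only -- /dev/sdb is just a placeholder for the USB disk, so double-check the device name with fdisk -l first, and parted syntax differs a little between versions):

Code:

# WARNING: this destroys everything on the disk -- check the device name!
parted /dev/sdb mklabel msdos
parted /dev/sdb mkpart primary fat32 0% 50%
parted /dev/sdb mkpart primary ext3 50% 100%

mkfs.vfat -F 32 /dev/sdb1   # the share-with-Windows half
mkfs.ext3 /dev/sdb2         # the Linux-native half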

FXEF 12-19-2007 10:15 PM

Quote:

Originally Posted by Boow (Post 1685799)
Well, if you're so obsessed with defragging in Linux, give fsck -a a try. You'll need a live CD to fsck the / partition since it's mounted all the time.

The fsck utility is a tool for checking the consistency of a file system, not for defragging it; fsck is equivalent to the scandisk and chkdsk programs in Windows.
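For what it's worth, running it by hand looks something like this (a sketch; /dev/sdb1 is just an example partition, and the filesystem should be unmounted first):

Code:

# consistency check only -- this does not move data around or defragment
umount /dev/sdb1
fsck -a /dev/sdb1   # -a automatically repairs the trivial problems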

nigelc 12-19-2007 10:48 PM

Hi,
do a "shutdown -F"
and it will tell you how much it is fragmented when the system reboots. It will be less than 5% unless the disk is full. The only file systems that seem to get fragmented are: ms-dos fat 16, fat32, ntfs. And vms. Since the person who wrote most of VMS is now working for Microsoft it probably has the some bugs. When I used to fix hard drives on DEC systems it was common to back the systems to tape, initialize the original drive, restore it all back again.
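Something like this (a sketch; sysvinit shutdown -- the figure shows up in the fsck summary as the boot messages scroll past, and again in the boot logs):

Code:

shutdown -r -F now

# during the next boot, e2fsck prints a summary along the lines of:
#   /dev/hda1: 114891/1281696 files (1.8% non-contiguous), 2462967/2562351 blocks

That "non-contiguous" percentage is the fragmentation figure.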

nigelc

JOKirk 01-28-2009 01:17 PM

Quote:

Originally Posted by knersus (Post 2908546)
Thanks, I will look into the Windows fs driver. There is of course the problem of portability - I would have to install that driver on all of the PCs where that USB disk may be plugged in. I suppose that the best way to do this would be to split the disk into 2 partitions, one being a FAT32 and the other a suitable Linux native partition. The FAT32 partition can then be mounted anywhere and can be used to hold the fs driver installation file for the other partition. The only other drawbacks will then be Vista, and having the necessary permissions on the PC to install the driver.

On searching for Linux defrag practices, I stumbled upon this (very old) thread, and thought I'd jump in with some thoughts.

Sounds like knersus is setting himself up for a huge mess. No way would I mess with a format that requires me to install drivers that I already know will likely be a huge pain on any of my newer workstations (or, in a couple of years, almost all of my workstations). NTFS would be by far the easiest and best solution, unless you want the reduced overhead of FAT. For an external drive, I'd just use FAT (many do by default) because it's widely supported and efficient on space usage -- despite its potential performance implications. Depends what kind of usage it's going to see. A few thoughts on fragmentation in general, not really pertaining to external drives:

1) Hard drives have multiple read/write heads, therefore it's not simply one read/write head running through a sequential set of data. While so far unreferenced, this is obviously pertinent in relation to "multi-user, multi-tasking, multi-threaded OS" performance. I do prefer to keep my hard drive thoroughly defragmented -- after every install, if I can, which I'll go into later -- but it's not likely to make a big difference in a lot of situations. You have numerous simultaneous read/write operations happening all the time, for all different files. The idea that Linux's ext3fs or the like does a better job than NTFS because it groups data better is, from all current information above, dead wrong. The reality is that neither file system (or, any pertinent file system for us) is really going to see a significant variance in performance in modern environments from a defrag, other than FAT systems.

2) These posts focus solely on single-file fragmentation, but not the order of files. If you're talking about one head scanning sequential data for performance reasons (which really is all we can address, since we don't write the algorithms for the hard drives, or add/remove read/write heads to/from hard drives), it's worth noting that most applications install countless small files for a variety of purposes, which may or may not be accessed at any time, especially when loading an application or saving a large amount of data. Ignoring file organization on a defragmented drive is no better than leaving it excessively fragmented -- any given head still has to seek over and over again. This is why I try to run defrag on my Windows machines (twice if needed) to get all the files themselves defragmented appropriately, -and- get them clustered together on one portion of the drive. Partitioning can help with this, also, if you feel like planning way ahead (though that's not really good practice).

3) Most of the relatively pertinent comparisons I've found here on LinuxQuestions.org have been in relation to FAT and FAT32, which are very outdated file systems no longer used on out-of-the-box installations of Windows (1998 was the last OS to use a FAT-style file system by default). If we're looking for ways to attack Windows, I think we'll need to dig a tad further, as that drivel serves no one, and is misleading in the question of what to use for a drive that'll be used on both Linux and Windows systems.

My conclusion: Defragmenting can potentially hurt performance for some operations if it ends up grabbing one file from many that you'll need, and locating it somewhere else on the drive entirely, and when you need it your read/write heads aren't already nearby. However, a best-practice would be to keep things thoroughly defragmented so that any given read/write head is most efficient, which will overall decrease the seek time invested by any given head for any given file. If your defrag software also bunches files together (Windows XP defrag does somewhat, although not very well, especially on a single pass) you will further decrease the seek time for most file operations. While following this practice will undoubtedly give worse performance every once in a while, on the whole your data access performance should be better than otherwise, by a small margin, whether you're using NTFS, EXT3FS, or what have you.

Thoughts? References to other postings that may pertain? This thread was my first response from Google, and the only one that really looked useful, so I'm certainly glad to entertain other references.

-John

PTrenholme 01-28-2009 03:30 PM

JOKirk, have you looked at the ext4 file system specifications? Apparently the file system developers had opinions similar to yours, and included fragmentation reduction and seek optimization algorithms in the new stuff. (And, in case you need it, support for multiple exabyte file and disk sizes.)

Quakeboy02 01-28-2009 04:14 PM

In a single-user system, where the user is doing a single task, then perhaps something can be said for periodic defragmentation. However the goal of defragmentation is to separate data by files and to crowd them together as if they won't ever grow; this is not beneficial to a multi-user/multi-tasking system. In Linux, the OS uses what's called an "elevator algorithm" so that the heads are moving in a single direction as long as possible. In the case of scattered data, this will generally improve throughput in a multi-user/multi-task system. If the data is clumped by files, there may be some "surging" noticed by users as they have to wait for their turn in the elevator.

If you have large files and are a single user, then perhaps you would benefit from this defragmentation. However, Linux uses memory and disk caching much better than MS does, so the benefit may only be noticeable on a benchmark.
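If you're curious which elevator your kernel is actually using, 2.6 kernels expose it per disk under /sys (a sketch; it assumes the disk is sda, and the list of available schedulers depends on how the kernel was built):

Code:

# the scheduler shown in brackets is the active one
cat /sys/block/sda/queue/scheduler
# e.g.  noop anticipatory deadline [cfq]

# switch it on the fly (as root)
echo deadline > /sys/block/sda/queue/scheduler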

mrclisdue 01-28-2009 04:27 PM

This is my all time favourite LQ defrag thread:

Code:

http://www.linuxquestions.org/questions/linux-software-2/a-file-system-defragmentation-tool-on-linux-545928/?highlight=fragmentation
cheers,

JOKirk 01-29-2009 08:54 AM

While Ext4 looks great, for a variety of reasons, it rather clearly doesn't eliminate fragmentation -- an impossible concept, from what I can tell, even though the first few sites I found regarding ext4 and fragmentation mentioned it. Quite a few pages erroneously reference Extents as eliminating file fragmentation, but that's not the case at all. What they do (as you probably already know, but I'll explain for the thread anyway) is they help reduce it somewhat by looking for a big enough spot for the file to be placed at its starting size, and they help reduce the amount of space needed in the file system table for the file as the file is created. Adding in data later will still have an impact on file fragmentation if there's something else coming in after it, or beyond the original size defined (though I'm having trouble finding in-depth explanations for how it handles such a situation). Both good things that help performance quite a bit in file system operations. Really, from what I'm seeing, investing in larger file system blocks is the best way to proactively reduce fragmentation on any file system.
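On that note, filefrag (from e2fsprogs) is a quick way to see how many extents a given file actually occupies, whatever the filesystem promises about avoiding fragmentation (a sketch; the paths are only examples):

Code:

# report the number of extents/fragments a file is stored in
filefrag /var/log/messages
# /var/log/messages: 3 extents found

# -v lists where each extent sits on disk
filefrag -v /path/to/some/large.iso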

It's still crazy to me that they can store an exabyte on a file system and have a 16 terabyte maximum file size, with bigger on the way. That's incredible. I still remember my old 286 that had a ... what, 30MB hard drive? Insanity.

Good references:

http://www.ibm.com/developerworks/li...g-filesystems/
http://kernelnewbies.org/Ext4#head-7...756346be4268f4

PTrenholme 01-29-2009 09:59 AM

Yes, it's probably impossible to completely eliminate fragmentation. (Even my wife's file cabinets contain several fragmented folders. In fact, she complains that I increase fragmentation every time I access the files. :))

You must be younger than I: My first system with a hard drive had a 10Mb one, and I was ecstatic! (Just to date myself, the first system I played with was an IBM system, with 512 bytes of RAM, that used "punched paper tape" for mass storage. That was when I was in college half a century ago.)

Borax_Man 01-31-2009 05:07 AM

There are a couple of ways to 'defrag' your system.

First is to MOVE all the files off the partition, then move them back on.

Second is to use e2defrag. It's an old program, and can only handle file systems with a block size of 1K.
There is a patched version of it which can handle larger block sizes; it's called
e2defrag 0.73pjm1

http://rpmseek.com/rpm/defrag_0.73pj...:3341643:0:0:0

There are two caveats. The filesystem must NOT be mounted when you defrag (or at least, must be mounted read-only), and it MUST be ext2, not ext3. So for ext3 filesystems, you MUST convert them to ext2 before using this tool, or it can screw up your filesystem.

BACK UP BEFOREHAND! I tested it on an ext3 filesystem and it trashed it, so make sure you convert it to ext2 and then, when done, switch back to ext3.
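Roughly, the sequence looks like this (an untested sketch -- substitute your own device for /dev/hda3, and again, back up first):

Code:

umount /dev/hda3                     # must not be mounted read-write
tune2fs -O ^has_journal /dev/hda3    # drop the journal: ext3 -> ext2
e2fsck -f /dev/hda3                  # force a clean check first
e2defrag /dev/hda3                   # the patched e2defrag mentioned above
e2fsck -f /dev/hda3                  # check again afterwards
tune2fs -j /dev/hda3                 # add the journal back: ext2 -> ext3
mount /dev/hda3                      # remount (assuming an fstab entry)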

But my Linux installation is about 7 years old, and in all honesty, the claims that you don't need to defrag are correct. The fragmentation level fluctuates, but remains low. It's the same now as it was 5 years ago. The only filesystem where fragmentation gets high is one I use for downloads, which is usually near capacity most of the time. But usually, you won't get to the state where defragging is necessary.

nigelc 01-31-2009 08:33 PM

Defragmenting is a concept invented by MS-DOS users. FAT12, FAT16 and later FAT32 are really bad file systems for file fragmentation. Every file always tries to go towards the outside clusters. Even NTFS does it.
If you have file_A taking the first few blocks on the disk, then file_B will go on the next few blocks, etc.
file_C will go next. Then delete file_A and create file_D, but this time make sure it will be bigger than the original file_A. Now we have fragmentation! :)
If the defrag prog does not leave gaps between them, then it will all go bad again.
The system is usually going slow because of other reasons.
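You can watch that exact scenario happen on a throwaway FAT image if you're curious (a sketch only -- whether you really end up with more than one fragment depends on the allocator, and on vfat filefrag has to fall back to the older FIBMAP interface):

Code:

# build a small FAT filesystem on a loopback file and mount it (as root)
dd if=/dev/zero of=/tmp/fat.img bs=1M count=64
mkfs.vfat /tmp/fat.img
mkdir -p /mnt/fattest
mount -o loop /tmp/fat.img /mnt/fattest

# file_A, then file_B right behind it
dd if=/dev/zero of=/mnt/fattest/file_A bs=1M count=10
dd if=/dev/zero of=/mnt/fattest/file_B bs=1M count=10

# delete file_A and create a bigger file_D in its place
rm /mnt/fattest/file_A
dd if=/dev/zero of=/mnt/fattest/file_D bs=1M count=20

filefrag /mnt/fattest/file_D   # more than one extent means file_D is fragmented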

JOKirk 02-04-2009 02:33 PM

I'm having a difficult time finding any good explanation of how ext4 (or 3, or whatever other file systems....) does things differently. Now it seems we're discussing more the size of file system blocks than anything. The only way to avoid running out of space for a file placed before more recent ones would be to leave gaps, which may help fragmentation on a file level, but if you install an application, it'll still spread out the files all over -- leaving you with the same performance problem you'd have with a few of the files being fragmented. I have stumbled on references to four kinds of elevator algorithms. Does anyone have any good references for the referenced method of allocating space for a new file? It would enlighten me and would be pertinent to the thread, methinks.

JOKirk 02-04-2009 03:07 PM

Quote:

Originally Posted by PTrenholme (Post 3425315)
Yes, it's probably impossible to completely eliminate fragmentation. (Even my wife's file cabinets contain several fragmented folders. In fact, she complains that I increase fragmentation every time I access the files. :))

You must be younger than I: My first system with a hard drive had a 10Mb one, and I was ecstatic! (Just to date myself, the first system I played with was an IBM system, with 512 bytes of RAM, that used "punched paper tape" for mass storage. That was when I was in college half a century ago.)

I was writing batch files and QBASIC code when I was 10 or so; I started messing with some stuff when I was 7-8. I'm 26 now, so that's... 18 years?

Man, time flies when you're having fun. :)

I haven't even -seen- a paper punch system, yet. I've seen pictures, but never in person.

chrism01 02-04-2009 07:23 PM

My 1st on-line/remote system: ICL something (19xx or 29xx I think), accessed via teletype, i.e. paper roll and optional paper tape.
My 1st desktop/local system: Commodore PET
:)

(I didn't own either ;) , just used them )

JOKirk 02-05-2009 10:44 AM

Quote:

Originally Posted by chrism01 (Post 3432492)
My 1st on-line/remote system: ICL something (19xx or 29xx I think), accessed via teletype, i.e. paper roll and optional paper tape.
My 1st desktop/local system: Commodore PET
:)

(I didn't own either ;) , just used them )

Vintage! That's awesome. :)

rweaver 02-05-2009 04:21 PM

Quote:

Originally Posted by ashley75 (Post 1685549)
Hi all,

how would you defrag on Linux????

by what command????


thanks,

It depends on the file system you're using, more so than whether it's "Linux" or Windows.

Ext2 - You can use e2defrag (unmounted only, I think... been a while.)
Ext3 - No tool. No need really.
Ext4 - Tool not available yet, but should be eventually. No need really.
XFS - Has a tool for use on the mounted filesystem while in use (see the sketch at the end of this post).
FAT32/NTFS - Both have defragmentation tools.

I'm sure there are others too, but those are the ones I'm most familiar with.

Generally a modern file system shouldn't be experiencing significant fragmentation. If you want to delve into the differences in how files are allocated and accessed, you would need significantly more in-depth information than I'm going to type up here :P There are tons of documents on the hows and whys available with a simple google search :)
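For the XFS case mentioned above, the check-and-defragment pass looks roughly like this (a sketch; xfs_fsr ships with the XFS userspace tools, the package name varies by distro, and /dev/sdb1 and /data are only example names):

Code:

# how fragmented is the XFS filesystem? (read-only query)
xfs_db -c frag -r /dev/sdb1

# reorganise files on the mounted filesystem, verbosely
xfs_fsr -v /data

xfs_fsr is happy to run while the filesystem is in use, which is why XFS is the odd one out in the list above.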

