Filesystem Question
hey gang,
I've been using Linux for a little while now, and occasionally boot back into winblows to play a game or two, and that's it. I remember running a quick system defrag in winblows to keep performance up after patching World of Warcraft, and then the thought came to me: does Linux need the occasional defrag to keep things running quickly? Anyone care to answer this quick question? My current filesystem is ReiserFS. |
The FAT filesystem employed by Windows has a rather nasty tendency to become fragmented, and doesn't perform very well at all when it does.
ReiserFS and ext2/3 filesystems are much harder to fragment, although it's not strictly impossible if they get very full. fsck will tell you how fragmented the filesystem is. If you do want to defragment, then the only option is to move all the files off the partition onto a different filesystem and then back onto it again, but I've never heard of anyone actually having to do this. |
Does anyone know WHY this is? What is the actual difference between FAT (and NTFS?) and extX/Reiser that makes the latter run smoothly without a defrag?
|
Fragmentation
The reason extX/Reiser filesystems run smoothly without defragmenting is not so much the on-disk format as the way the operating system allocates space when saving files. If you enlarge a file in Windows on FAT and resave it, the additional data may be written wherever free space happens to be, not necessarily next to the original file. This is what causes fragmentation.
UNIX and GNU/Linux systems keep the file and its additions together in one place when they resave the file. This mostly eliminates fragmentation, because the file stays contiguous on the disk. |
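The allocation difference described above can be sketched with a toy model. Everything here is an illustrative assumption (the "first free block" and "first free contiguous run" strategies, the tiny disk, the block markers); real filesystems use far smarter allocators:

```python
# Toy disk: a list of blocks, None = free, "F" = our file, "X" = other data.
# A deliberately simplified sketch of two growth strategies, not how any
# real filesystem is implemented.

def grow_scatter(disk, file_blocks, extra):
    """FAT-style growth: new blocks go into whatever free slots come
    first, so the file can end up scattered across the disk."""
    for _ in range(extra):
        i = disk.index(None)              # first free block anywhere
        disk[i] = "F"
        file_blocks.append(i)
    return file_blocks

def grow_contiguous(disk, file_blocks, extra):
    """ext/Reiser-style idea: place the whole enlarged file in the
    first free run big enough to hold all of it."""
    need = len(file_blocks) + extra
    for i in file_blocks:                 # release the old copy first
        disk[i] = None
    for start in range(len(disk) - need + 1):
        if all(b is None for b in disk[start:start + need]):
            for i in range(start, start + need):
                disk[i] = "F"
            return list(range(start, start + need))
    raise RuntimeError("no contiguous free run large enough")

def fragments(blocks):
    """How many contiguous runs the file occupies."""
    return 1 + sum(1 for a, b in zip(blocks, blocks[1:]) if b != a + 1)

def fresh_disk():
    """A file in blocks 0-3, another file ("X") right behind it in 4-7."""
    disk = [None] * 32
    for i in range(4):
        disk[i] = "F"
    for i in range(4, 8):
        disk[i] = "X"
    return disk

scattered = grow_scatter(fresh_disk(), [0, 1, 2, 3], 6)
relocated = grow_contiguous(fresh_disk(), [0, 1, 2, 3], 6)
print(fragments(scattered))   # 2 (file split around the "X" blocks)
print(fragments(relocated))   # 1 (whole file relocated to one free run)
```

Growing the same file past the neighbouring data leaves the first strategy with a split file, while the second pays the cost of moving the file once and keeps it in one piece.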
Isn't that slowing down the "enlarge" process? Since the kernel would have to move data around to make room for the extra bits, wouldn't it be slower?
I can see how this works better, but I just wonder if there is any difference in performance. Thanks for the info! |
Fragmentation
In general, making sure that the entire file is in one place on the hard drive does take longer, and slows down the response a little. However, unless the file is really large, the extra time is small. Rather than shuffling existing data around block by block, the GNU/Linux operating system simply writes the enlarged file into the first free region of the disk big enough to hold all of it, then updates the file's physical location information.
You get that time back when you open the file again, because the operating system does not have to search all over the hard drive to find all of your file. It can just go to the physical location once and get your file. What slows down the Windows operating system is when you have a large file with little pieces of it scattered all over the hard drive. Windows then has to find and go to each physical location of each piece of the file to get it. This is why fragmentation can slow down your computer's response. |
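The seek-cost argument above can be put into rough numbers. The constants below are ballpark assumptions for an ordinary spinning disk (about 9 ms per seek, fast sequential transfer), not measurements:

```python
# Rough cost model: each separate run of blocks costs one seek, and
# seeks dominate on a spinning disk. The constants are assumed,
# illustrative figures only.

SEEK_MS = 9.0                 # assumed average seek time
READ_MS_PER_BLOCK = 0.05      # assumed sequential read time per block

def read_time_ms(fragment_sizes):
    """Estimated time to read a file laid out in the given runs."""
    seeks = len(fragment_sizes) * SEEK_MS
    transfer = sum(fragment_sizes) * READ_MS_PER_BLOCK
    return seeks + transfer

contiguous = read_time_ms([2000])       # one 2000-block run
fragmented = read_time_ms([100] * 20)   # same data in 20 pieces
print(round(contiguous))   # ~109 ms: one seek, then pure streaming
print(round(fragmented))   # ~280 ms: 20 seeks to read the same data
```

The transfer time is identical in both cases; the entire difference comes from the extra seeks, which is exactly why a badly fragmented file feels slow even though no more data is being read.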
wikipedia filesystem
|
Even in Windows, the net effect of fragmentation is much less of a practical speed problem than it used to be. Not only have disk drives gotten considerably faster and smarter, but the disk I/O scheduling algorithms in current versions of Windows are considerably improved.
"Defragmentation" is not something that you routinely have to do with any Unix/Linux filesystem, or for that matter with Windows' NTFS. Even though it certainly sold a lot of copies of Norton Utilities, its practical impact on system performance is by now very slight. |
Thanks so much for all the info. It has really been informative.
How much have Microsoft improved the filesystem with NTFS? Does NTFS work better, needing less defragmentation? Is NTFS a journaling filesystem? Are the guys at Microsoft working on an improvement to NTFS, or a new filesystem, or something like that? I would like to see how much they are going to borrow from extX/reiser in their next filesystem. :) |