Linux File Optimization
I'm aware that fragmentation is essentially not an issue on Linux.
However, we've recently run into issues where disk performance degrades over time on both Windows and Linux. The question most Windows users pose, "why does Windows get slower over time?", is the key. The answer for the disk slowness is simple: files are laid out optimally when the machine is built, and as they get moved around by defrag (or by the Linux file system automatically) and by patching, they are no longer optimally placed. The only true way to keep them optimized is to have a process watch disk access and reorganize files on disk accordingly. Hence my question: does anyone know of a product or process on Linux that performs the same task as Diskeeper's I-FAAST technology? |
You can get some information in this thread:
http://www.linuxquestions.org/questi...-linux-331862/ edit: and this: http://www.linuxquestions.org/questi...=fragmentation |
Quote:
So far (Linux-only system), I have had good results with a time-honored method: big hard drives. If you have a lot more space than you need, there is less tendency toward any fragmentation.
|
Nothing to do with fragmentation
Just to be clear, this post is not about fragmentation; the first line of my original post specifically states that Linux 'essentially' doesn't suffer from fragmentation.
Regardless of any other factor, both Linux and Windows machines are in an "optimized" state when they are built, because the files are closely packed on the drive. Later, regardless of disk size, as the system is patched those files get moved further and further out on the drive. The end result is that the heads travel greater distances, and more often, seeking the same files that used to sit near the center when the system was built. Even if you never install or run anything other than patches, the system will still become unoptimized. Again, this has nothing to do with fragmentation; this is file optimization. Does anyone know of a way to do this on Linux? |
You may not know it, but it has everything to do with fragmentation. Maybe this will help:
http://geekblog.oneandoneis2.org/ind..._defragmenting

The only reason the HDD heads would need to move greater and greater distances is fragmentation, and on a Linux system this is negligible unless you keep the HDD near capacity. For XFS you can use xfs_fsr to do exactly what you want: not really a defragment, but a reorganization. XFS is also a very high-performance filesystem, so this may be your best option. If you are using ext2/3 there is 'defrag', which is a rather old defrag program; I'm not sure whether to trust it, and it does only a regular defrag, not a reorganization like you want. |
I'm not convinced that you know what you are talking about, and I recommend that you study the issue further. Remember that HDDs have multiple platters, and I don't see why moving a file to the end of one platter would force the heads to travel a longer distance; that depends on where the heads were previously.
|
Quote:
We've already seen this have an effect on over 60K defragmented Windows machines. However, our Linux machines continue to suffer from it. |
If you actually look at the file organization on a fresh install of Windows, prior to defragmentation, you will find that the system is hugely fragmented. In fact, the first thing you should do with Windows after installation is defrag the drive.

If you actually look at the locations of specific files on a Linux system after installation, you will find them scattered across the entire drive. In fact, one of the ext2/3 strategies for preventing fragmentation is to scatter files across the entire drive. In Windows, some defragmenters do indeed organize files according to some scheme so that the most commonly used ones are near the center, with the intent of reducing head motion. But this is not default filesystem behavior on either Linux or Windows.

You gain the performance without reorganizing the files by using intelligent caching and by using drives smart enough to reorder I/O requests so as to minimize the time needed to access all the data in the I/O queue. Both Windows and Linux use caching, and all SCSI, SAS, and (I think) SATA drives can reorder I/O. Older IDE drives do not. Also, journaling filesystems rather defeat the purpose of file reorganization anyway; the heads have to move to write to the journal.

Years ago on Windows NT, I used Raxco Perfect Disk for a while because it reorganized files in a fashion reminiscent of what is being discussed here. While I found it to be a good defragger, I could never document significant performance gains over a simpler defragger such as Diskeeper. At the time I was using SCSI disks. Since then, as memory has gotten cheap and plentiful, and hence caching has become both more common and more extensive, I am totally unconvinced that the reorganization has any significant benefit. And I am still using SCSI disks. |
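To see why request reordering recovers much of what careful file placement tries to buy, here is a minimal Python sketch comparing total head travel for a queue serviced in arrival (FIFO) order versus a simplified one-pass SCAN ("elevator") ordering, the kind of reordering smart drives and I/O schedulers do. The cylinder numbers are made up purely for illustration:

```python
def head_travel(start, requests):
    """Total head movement (in cylinders) servicing requests in the given order."""
    pos, total = start, 0
    for r in requests:
        total += abs(r - pos)
        pos = r
    return total

def elevator_order(start, requests):
    """Simplified one-pass SCAN: sweep upward through all requests at or
    above the head, then downward through the rest."""
    up = sorted(r for r in requests if r >= start)
    down = sorted((r for r in requests if r < start), reverse=True)
    return up + down

# Hypothetical request queue (cylinder numbers), head starting at cylinder 50.
queue = [95, 180, 34, 119, 11, 123, 62, 64]
print(head_travel(50, queue))                      # FIFO order: 644
print(head_travel(50, elevator_order(50, queue)))  # SCAN order: 299
```

The reordered queue cuts head travel by more than half here, without moving a single file on disk; that is the point being made about caching and I/O reordering above.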
Yes, it is proprietary, but it has done wonders for us. We had boot times of 6-8 minutes (older machines) down to around 3, and application start times of 30 seconds down to 3-5 seconds. Diskeeper has always done a measure of reorganization of directories and files, similar to how xfs_fsr's reorganization behaves. However, the I-FAAST service stays live, actually watches the system over time, and organizes the drive accordingly.
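In spirit, "watch access over time and place the hottest files in the fastest region" can be sketched in a few lines. This is a toy illustration of the general idea only, not Diskeeper's actual algorithm, and the file paths are invented examples:

```python
from collections import Counter

class AccessTracker:
    """Toy sketch of frequency-based placement: count file accesses, then
    rank files so the most-used ones would go in the fastest disk region.
    (Illustrative only; not how I-FAAST is actually implemented.)"""

    def __init__(self):
        self.counts = Counter()

    def record(self, path):
        """Note one access to the given file."""
        self.counts[path] += 1

    def placement_order(self):
        """Files ranked hottest-first, i.e. candidates for the fast region."""
        return [path for path, _ in self.counts.most_common()]

tracker = AccessTracker()
for path in ["/bin/sh", "/etc/passwd", "/bin/sh", "/var/log/syslog", "/bin/sh"]:
    tracker.record(path)
print(tracker.placement_order()[0])  # prints: /bin/sh
```

A real implementation would have to sample actual block-level access, weight recent activity more heavily, and do the physical moves during idle time, but the ranking step is the core of it.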
Here's the sales pitch off their site: Quote:
|
A Great Post
Thank you again! |
Ansuer,
Is your "target system" a multi-user system - i.e. does it have a number of users logged in and working at the same time - or is it essentially a workstation, however large it may or may not be? |
I question whether you get the performance boost from the I-FAAST scheme specifically, or just from defragging the drives.

Without doubt, defragging a Windows drive can have a very pronounced effect on performance - very pronounced indeed. But I *think* that most of this is attributable to the fragmentation as opposed to the scattering. After all, when a file is fragmented, the heads have to move all over the place to pick up the entire file; when files are merely scattered, the heads just have to move to where each file is.

The test would be to take a badly fragmented drive and image it. Then defragment one image using the I-FAAST technology and another image using some other defragger, place the drives in identical machines, and run them. If I-FAAST is helping substantially, it will show.

Also, both Linux and Windows systems accumulate digital debris and detritus over time. As directories get larger, Linux takes longer both to read and to write files in those directories. In some circumstances this can cause a substantial performance hit, and it is almost always the result of maintenance that was not performed (commonly because nobody recognized that it needed to be performed). In Windows, you defrag, you clean the registry, and you turn off all the shovelware that has been placed in the system startup. In Linux, you periodically scan your directories looking for ones that are getting huge and identify why; you periodically check for orphaned packages that are no longer needed; you periodically delete old log files that are archived but no longer needed for any purpose. You also look for log files that keep growing indefinitely... those eventually WILL fragment your drive regardless of anything else. |
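The periodic directory scan described above doesn't need any special tooling. Here is a minimal Python sketch that walks a tree and reports which directories directly hold the most bytes, a quick way to spot runaway logs or caches; the `/var/log` path in the usage comment is just an example:

```python
import os

def biggest_dirs(root, top=5):
    """Walk a directory tree and return (directory, bytes) pairs for the
    directories holding the most bytes of files directly inside them,
    largest first. Subdirectory contents count toward the subdirectory."""
    sizes = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        total = 0
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # file vanished or is unreadable; skip it
        sizes[dirpath] = total
    return sorted(sizes.items(), key=lambda kv: kv[1], reverse=True)[:top]

# Example usage (path is illustrative):
#   for d, n in biggest_dirs("/var/log"):
#       print(n, d)
```

Run from cron once a week, something like this surfaces the ever-growing log directory long before it becomes a performance (or full-disk) problem.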