LinuxQuestions.org
LinuxQuestions.org > Forums > Linux Forums > Linux - General
Old 09-29-2009, 06:54 AM   #1
Ansuer
LQ Newbie
 
Registered: Jun 2006
Location: NC
Distribution: Debian/Ubuntu/Gentoo
Posts: 22

Rep: Reputation: 19
Linux File Optimization


I'm aware that fragmentation is essentially not an issue on Linux.

However, we've recently run into issues where disk performance degrades over time on both Windows and Linux. The question that most Windows users pose, "why does windows get slower over time?", is the key. The answer for the disk slowness is simple: files are laid out optimally when the machine is built, and as they get moved around by defrag (or by the Linux file system automatically) and by patching, they are no longer optimally placed.

The only true way to keep them optimized is to have a process watch the disk access and reorganize files on disk accordingly.

Hence my question: does anyone know of a product or process on Linux that performs the same task as Diskeeper's I-FAAST technology?
 
Old 09-29-2009, 07:02 AM   #2
sycamorex
LQ Veteran
 
Registered: Nov 2005
Location: London
Distribution: Slackware64-current
Posts: 5,563
Blog Entries: 1

Rep: Reputation: 1024
You can get some information in this thread:
http://www.linuxquestions.org/questi...-linux-331862/

edit:
and this:
http://www.linuxquestions.org/questi...=fragmentation

Last edited by sycamorex; 09-29-2009 at 07:04 AM.
 
Old 09-29-2009, 07:31 AM   #3
johnsfine
Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,076

Rep: Reputation: 1110
Quote:
Originally Posted by Ansuer View Post
"why does windows get slower over time?"
Windows gets slower over time because of all the auto-start processes installed by every application that was designed as if it should be the most important thing on your computer, as well as spyware, viruses, and other malware. The accumulation of crud in the registry also slows it down, both directly and through whatever malware is kept alive by that crud.

Quote:
The answer for the disk slowness is simple.
In Windows or Linux, simply having more files and simply using a larger fraction of the hard drive makes the file system slower. The fragmentation is a secondary factor even in Windows. A 50% full fragmented filesystem will perform a lot better than a 90% full defragmented filesystem (assuming other factors, such as average file size, are the same).
 
Old 09-29-2009, 07:45 AM   #4
pixellany
LQ Veteran
 
Registered: Nov 2005
Location: Annapolis, MD
Distribution: Arch/XFCE
Posts: 17,802

Rep: Reputation: 728
So far (Linux-only system), I have had good results with a time-honored method: Big hard drives. If you have a lot more space than you need, there is less tendency to get any fragmentation.
 
Old 09-29-2009, 12:39 PM   #5
Ansuer
LQ Newbie
 
Registered: Jun 2006
Location: NC
Distribution: Debian/Ubuntu/Gentoo
Posts: 22

Original Poster
Rep: Reputation: 19
nothing to do with fragmentation

Just to be clear, this post is not about fragmentation; the first line in my original post specifically states that Linux 'essentially' doesn't suffer from fragmentation.

Regardless of any other factor, both Linux and Windows machines are in an "optimized" state when they are built, because the files are closely packed on the drive. Later, regardless of disk size, as the system is patched those files get moved further and further out onto the drive. The end result is that the heads move greater distances, and more often, seeking the same files that used to be down near the center when the system was built. Even if you never install or run anything other than patches, the system will still become unoptimized.

Again, this has nothing to do with fragmentation; this is file optimization. Does anyone know of a way to do this on Linux?
 
Old 09-29-2009, 01:44 PM   #6
H_TeXMeX_H
Guru
 
Registered: Oct 2005
Location: $RANDOM
Distribution: slackware64
Posts: 12,928
Blog Entries: 2

Rep: Reputation: 1269
You may not know it, but it has everything to do with fragmentation. Maybe this will help:
http://geekblog.oneandoneis2.org/ind..._defragmenting

The only reason the HDD heads would need to move greater and greater distances is due to fragmentation.

However, on a linux system, this is negligible unless you keep the HDD full to near capacity.

For XFS you can use xfs_fsr to do exactly what you want: not really a defragmentation but a reorganization. And XFS is a very high-performance filesystem, so this may be the best option.

If you are using ext2/3 there is 'defrag', which is a rather old defrag program ... not sure whether to trust it, and it just does a regular defrag, not a reorganization like you want.
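For reference, here's a minimal shell sketch of checking per-file fragmentation before reaching for either tool. filefrag ships with e2fsprogs and xfs_fsr with xfsprogs, but availability varies by distro, and the target directory below is just an example:

```shell
# Sketch: report extent counts for files in a directory; more extents
# per file means more fragmentation. /var/log is an arbitrary example.
target=/var/log
checked=0
if command -v filefrag >/dev/null 2>&1; then
    for f in "$target"/*; do
        [ -f "$f" ] || continue
        filefrag "$f" 2>/dev/null   # prints e.g. "...: 3 extents found"
        checked=$((checked + 1))
    done
fi
echo "files checked: $checked"
# On XFS, the reorganization step would then be (as root):
#   xfs_fsr -v /mount/point      # reorganize a whole mounted filesystem
#   xfs_fsr -v /path/to/file     # or a single file
```

The xfs_fsr invocations are left as comments because they need root and an XFS mount to do anything.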
 
0 members found this post helpful.
Old 09-29-2009, 07:21 PM   #7
Ansuer
LQ Newbie
 
Registered: Jun 2006
Location: NC
Distribution: Debian/Ubuntu/Gentoo
Posts: 22

Original Poster
Rep: Reputation: 19
Quote:
Originally Posted by H_TeXMeX_H View Post
You may not know it, but it has everything to do with fragmentation. Maybe this will help:
http://geekblog.oneandoneis2.org/ind..._defragmenting
My question has nothing to do with defragmentation. I've been engineering Linux systems for over a decade; I'm aware that these file systems write files into contiguous blocks. Hence the first line of this post.

Quote:
Originally Posted by H_TeXMeX_H View Post
The only reason the HDD heads would need to move greater and greater distances is due to fragmentation.
This is incorrect. If two non-fragmented files sit at the base of a drive, one right after the other, then the heads hardly have to move when the files are read sequentially. This is optimal and fast. If you move one of those files to the end of the drive and try to read them one after the other, the heads now have to move a large distance. The best example is OS-related files. When the machine is built they are together on the drive, making access optimal. As the machine gets patched over time they get spread out everywhere. The I-FAAST system watches for the most often accessed files and clusters them together near the center (beginning) of the disk.

Quote:
Originally Posted by H_TeXMeX_H View Post
For XFS you can use xfs_fsr to do exactly what you want, not really a defragment but a reorganization. And XFS is a very high performance filesystem, so this may be the best option.
XFS actually does suffer from fragmentation and as far as I know xfs_fsr just keeps the disk defragmented. It does re-organize the files to a certain extent but by no means makes them optimized.
 
Old 09-30-2009, 05:17 AM   #8
H_TeXMeX_H
Guru
 
Registered: Oct 2005
Location: $RANDOM
Distribution: slackware64
Posts: 12,928
Blog Entries: 2

Rep: Reputation: 1269
I'm not convinced that you know what you are talking about, and I recommend that you study the issue further. Remember that HDDs have many platters, and I don't see why moving a file to the end of one platter would cause the drive heads to move a longer distance ... that depends on where the heads were previously.
 
0 members found this post helpful.
Old 09-30-2009, 12:30 PM   #9
Ansuer
LQ Newbie
 
Registered: Jun 2006
Location: NC
Distribution: Debian/Ubuntu/Gentoo
Posts: 22

Original Poster
Rep: Reputation: 19
Quote:
Originally Posted by H_TeXMeX_H View Post
I'm not convinced that you know what you are talking about and recommend that you study the issue further.
No offense, but did you read my first post? Do you even know what I-FAAST is and what it does?

We've already seen this have an effect on over 60K defragmented windows machines. However, our Linux machines continue to suffer from it.
 
Old 09-30-2009, 12:32 PM   #10
jiml8
Senior Member
 
Registered: Sep 2003
Posts: 3,171

Rep: Reputation: 114
If you actually look at the file organization on a fresh install of windows, prior to defragmentation, you will find that the system is hugely fragmented. In fact, the first thing you should do with Windows after installation is defrag the drive.

If you actually look at the locations of specific files in a Linux system after installation, you will find them scattered across the entire drive. In fact, one of the ext2/3 strategies to prevent fragmentation is to scatter files across the entire drive.

In Windows, some defragmenters do indeed organize the files according to some scheme so that the most commonly used files are near the center, with the intent of causing less head motion. But this is not default behavior for the filesystem on either Linux or Windows.

You'll gain the performance without reorganizing the files by using intelligent caching and by using drives that are smart enough to reorder I/O requests so as to minimize the time necessary to access all the data in the I/O queue. Both Windows and Linux use caching, and all SCSI, SAS, and (I think) SATA drives can reorganize I/O. Older IDE drives do not reorganize I/O.
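As a hedged illustration of the second point, the sysfs paths below are the standard places modern Linux exposes the I/O scheduler and the drive's queue depth (which governs how many requests the drive can reorder via TCQ/NCQ). Device names vary per machine, and not every device exposes a queue_depth:

```shell
# Sketch: show each block device's I/O scheduler and, where present,
# its queue depth. Read-only; safe to run as any user.
found=0
for dev in /sys/block/*; do
    [ -e "$dev/queue/scheduler" ] || continue
    found=$((found + 1))
    printf '%s scheduler: %s\n' "${dev##*/}" "$(cat "$dev/queue/scheduler")"
    if [ -e "$dev/device/queue_depth" ]; then
        printf '%s queue_depth: %s\n' "${dev##*/}" "$(cat "$dev/device/queue_depth")"
    fi
done
echo "block devices inspected: $found"
```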

Also, journaling filesystems kind of defeat the purpose of the file reorganization anyway; the heads have to move to write to the journal.

Years ago on Windows NT, I used Raxco Perfect Disk for a while because it reorganized the files in a fashion reminiscent of what is being talked about here. While I found it to be a good defragger, I couldn't ever document significant performance gains over a simpler defragger such as Diskeeper. At the time, I was using SCSI disks. Since then, as memory has gotten cheap and plentiful, and hence caching has become both more common and more extensive, I am totally unconvinced that the reorganization has any significant benefit. And I am still using SCSI disks.

Last edited by jiml8; 09-30-2009 at 12:49 PM.
 
1 members found this post helpful.
Old 09-30-2009, 12:48 PM   #11
H_TeXMeX_H
Guru
 
Registered: Oct 2005
Location: $RANDOM
Distribution: slackware64
Posts: 12,928
Blog Entries: 2

Rep: Reputation: 1269
Quote:
Originally Posted by Ansuer View Post
No offense, but did you read my first post? Do you even know what I-FAAST is and what it does?
Looks like proprietary BS to me ... they don't bother explaining exactly how it works. Do you know?
 
0 members found this post helpful.
Old 09-30-2009, 01:16 PM   #12
Ansuer
LQ Newbie
 
Registered: Jun 2006
Location: NC
Distribution: Debian/Ubuntu/Gentoo
Posts: 22

Original Poster
Rep: Reputation: 19
Yes, it is proprietary, but it's done wonders for us. It brought boot times of 6-8 minutes (older machines) down to about 3, and app start times of 30 seconds down to 3-5 seconds. Diskeeper has always done a measure of reorganization of directories and files, similar to how xfs_fsr's reorg method behaves. However, the I-FAAST service stays live, actually watches the system over time, and organizes the drive accordingly.

Here's the sales pitch off their site:
Quote:
Utilizing a specially formulated technology, I-FAAST closely monitors file usage and organizes the most commonly accessed files for the fastest possible access
I'd love an open source solution all around but this is the first technology I've found that actually profiles the disk access and takes action. Hoping there's something else.
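In the absence of such a product, one crude open approximation (my own sketch, not an existing tool) is to use access timestamps to see which files are actually being read. Keep in mind that noatime/relatime mounts make atime unreliable, and real access profiling would need something like blktrace or an inotify-based watcher:

```shell
# Sketch: list the ten most recently accessed files in a directory by
# atime, using GNU find. /usr/bin is an arbitrary example tree.
tree=/usr/bin
recent=$(find "$tree" -maxdepth 1 -type f -printf '%A@ %p\n' 2>/dev/null \
         | sort -rn | head -10)
echo "$recent"
```

This only shows recency, not frequency, so it's a long way from what I-FAAST claims to do, but it can at least identify candidate files for manual placement.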
 
Old 09-30-2009, 02:33 PM   #13
Ansuer
LQ Newbie
 
Registered: Jun 2006
Location: NC
Distribution: Debian/Ubuntu/Gentoo
Posts: 22

Original Poster
Rep: Reputation: 19
A Great Post

Quote:
Originally Posted by jiml8 View Post
If you actually look at the locations of specific files in a Linux system after installation, you will find them scattered across the entire drive. In fact, one of the ext2/3 strategies to prevent fragmentation is to scatter files across the entire drive.
@jiml8 - you totally rock, thank you so much for reading my original post. All your points are excellent.

Quote:
Originally Posted by jiml8 View Post
it reorganized the files in a fashion reminiscent of what is being talked about here. While I found it to be a good defragger, I couldn't ever document significant performance gains
Our machines that have been alive for a long time, and have received a lot of patches and software distributions, really do get scattered. It's like file soup, especially the directory structures, MFT, and OS files. We have seen huge performance gains on those systems with I-FAAST.


Quote:
Originally Posted by jiml8 View Post
Also, journaling filesystems kind of defeat the purpose of the file reorganization anyway; the heads have to move to write to the journal.
That's a huge point, for write requests especially; I'll discuss it with some of our other engineers. Our biggest issue was read performance, though. A system whose pagefile or swap is on the OS physical disk will also see this kind of hit, since the heads always have to go back to those locations.

Thank you again!
 
Old 09-30-2009, 02:50 PM   #14
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Squeeze 2.6.32.9 SMP AMD64
Posts: 3,240

Rep: Reputation: 121
Ansuer,

Is your "target system" a multi-user system - i.e., one with a number of users logged in and working at the same time - or is it essentially a workstation, however large it may or may not be?
 
1 members found this post helpful.
Old 09-30-2009, 03:04 PM   #15
jiml8
Senior Member
 
Registered: Sep 2003
Posts: 3,171

Rep: Reputation: 114
I question whether you get the performance boost from the I-FAAST scheme particularly, or just from defragging the drives.

Without doubt, defragging a Windows drive can have a very pronounced effect on performance - very pronounced indeed. But I *think* that most of this is attributable to the fragmentation as opposed to the scattering. After all, when a file is fragmented, the heads have to move all over the place to pick up the entire file; when the files are scattered, the heads just have to move to where the file is.

The test would be to take a badly fragmented drive and image it. Then defragment one image using the I-FAAST technology and defrag another image using some other defragger. Place the drives in identical machines, and run them.

If I-FAAST is helping substantially, it'll show.
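A rough way to run that comparison is below (a sketch only; on a real test you would drop the page cache as root with `echo 3 > /proc/sys/vm/drop_caches` between runs so the reads actually hit the disk, and you'd read the real working set, not a scratch file):

```shell
# Sketch: time a sequential read of a scratch file. Repeat the same
# read on each candidate image and compare. Size/path are arbitrary.
demo=$(mktemp -d)
dd if=/dev/zero of="$demo/sample" bs=1M count=8 2>/dev/null
start=$(date +%s%N)                      # GNU date, nanoseconds
cat "$demo/sample" > /dev/null
end=$(date +%s%N)
elapsed_ms=$(( (end - start) / 1000000 ))
echo "read took ${elapsed_ms} ms"
rm -rf "$demo"
```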

Also, both Linux systems and Windows systems accumulate digital debris and detritus over time. As directories get larger, Linux will take longer to both read and write files in that directory. In some circumstances, this can result in a substantial performance hit, and almost always is a result of some maintenance that was not performed (commonly due to a failure to recognize that it needed to be performed).

In Windows, you defrag, you clean the registry, you turn off all the shovelware that has been placed in the system startup.

In Linux, you periodically scan your directories looking for ones that are getting huge and identifying why they are huge, and you periodically check for orphaned packages that are no longer needed. You periodically delete old log files that are archived but no longer needed for any purpose. You also look for log files that keep growing indefinitely...those eventually WILL fragment your drive regardless of anything else.
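The Linux routine above can be sketched as a periodic script (the paths and thresholds are examples, not recommendations):

```shell
# Sketch of the periodic maintenance checks described above.

# 1. Largest directories under /var, a common growth spot:
big_dirs=$(du -x /var 2>/dev/null | sort -rn | head -10)
echo "$big_dirs"

# 2. Archived logs older than 90 days (candidates for deletion):
find /var/log -name '*.gz' -mtime +90 2>/dev/null

# 3. Files that have grown past an arbitrary 100 MB threshold
#    (possible indefinitely-growing logs):
find /var/log -type f -size +100M 2>/dev/null
```

Orphaned-package checks are distro-specific (e.g. deborphan on Debian-family systems), so they're left out of the sketch.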

Last edited by jiml8; 09-30-2009 at 03:10 PM.
 
  


Tags
file, linux, optimization

