The following Wikipedia article is a good introduction to ext3:
http://en.wikipedia.org/wiki/Ext3. It states that defragmentation is not necessary on an ext3 filesystem. However, it mentions the defrag tool:
http://ck.kolivas.org/apps/defrag/
It's a simple approach that works on any mounted filesystem. It searches the filesystem for files and sorts them by descending size. Then it copies each file to a new temporary file and replaces the original if the copy succeeded. (I found that even Arch Linux does something similar with its pacman-optimize script: pacman uses a filesystem-based database, and the script attempts to relocate those small files into one contiguous region.) So this seems like a sane way to optimize files on an ext3 filesystem.
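If I've read the approach right, the core of it boils down to something like the following. This is just a minimal Python sketch of my understanding, not the actual script; the temp-file naming and the descending size sort are my own assumptions:

    #!/usr/bin/env python3
    """Rough sketch of the copy-to-temp-and-replace defrag idea (not the real script)."""
    import os
    import shutil
    import sys

    def defrag_tree(root):
        # Collect regular files and sort them by descending size,
        # so the largest files get rewritten first.
        files = []
        for dirpath, _dirs, names in os.walk(root):
            for name in names:
                path = os.path.join(dirpath, name)
                if os.path.isfile(path) and not os.path.islink(path):
                    files.append((os.path.getsize(path), path))
        files.sort(reverse=True)

        for _size, path in files:
            tmp = path + ".defrag.tmp"
            try:
                # Copy the file to a temporary file; the filesystem is free to
                # allocate the copy in (hopefully) contiguous blocks.
                shutil.copy2(path, tmp)
                # Replace the original only if the copy succeeded.
                os.replace(tmp, path)
            except OSError as err:
                print("skipping %s: %s" % (path, err), file=sys.stderr)
                if os.path.exists(tmp):
                    os.remove(tmp)

    if __name__ == "__main__":
        defrag_tree(sys.argv[1])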
My usage scenario is as follows: I have many files that I touch every day, effectively rewriting each one by updating some values in it and appending data to it.
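Concretely, each daily update looks roughly like this (file layout and names are just placeholders for illustration):

    # Placeholder sketch of my daily update pattern: rewrite one value near the
    # top of the file, then append new data at the end.
    from datetime import date

    def daily_update(path, new_value, new_records):
        with open(path, "r+", encoding="utf-8") as f:
            lines = f.readlines()
            lines[0] = "last_updated=%s value=%s\n" % (date.today(), new_value)
            f.seek(0)
            f.writelines(lines)
            f.truncate()          # rewrite the file with the updated value
        with open(path, "a", encoding="utf-8") as f:
            f.writelines(new_records)   # append today's data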
I have a disk of about 400 GB that is about 50% full. Checking the fragmentation with the fragcheck tool reports something like this:
40% of files are non-contiguous, 8.xx fragments avg. per file.
After running the defrag tool, fragcheck shows something like:
60% of files are non-contiguous, 6.xx fragments avg. per file.
This does not seem too bad: the average number of fragments per file is smaller, but more files are fragmented. I ran the defrag tool twice, and it does not seem to speed things up.
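In case it matters how I gather these numbers: besides fragcheck, I also spot-check with filefrag from e2fsprogs and aggregate its output. This is my own wrapper, not part of fragcheck, and the parsing assumes filefrag's usual "path: N extents found" output line:

    # Aggregate per-file extent counts using filefrag from e2fsprogs.
    import os
    import re
    import subprocess
    import sys

    def fragmentation_stats(root):
        total = fragmented = extents = 0
        for dirpath, _dirs, names in os.walk(root):
            for name in names:
                path = os.path.join(dirpath, name)
                if not os.path.isfile(path) or os.path.islink(path):
                    continue
                out = subprocess.run(["filefrag", path],
                                     capture_output=True, text=True).stdout
                m = re.search(r":\s+(\d+) extents? found", out)
                if not m:
                    continue
                n = int(m.group(1))
                total += 1
                extents += n
                if n > 1:
                    fragmented += 1
        if total:
            print("%d%% of files are non-contiguous, %.2f fragments avg. per file"
                  % (100 * fragmented // total, extents / total))

    if __name__ == "__main__":
        fragmentation_stats(sys.argv[1])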
Does anybody have experience with defragmenting an ext3 filesystem or with the defrag tool?