Linux - Software
This forum is for Software issues. Having a problem installing a new program? Want to know which application is best for the job? Post your question in this forum.
Every filesystem is different, so running a defrag program can damage a filesystem it does not support. Even if it does support the filesystem in question, it can still mess up the filesystem's structure. Linux caches more aggressively than Windows, so defragging a mounted drive can be hazardous. It is best to use the filesystem's dump utilities to make a clean, non-fragmented filesystem.
Several years ago, I think one of the causes of Windows 98 breaking down on me was the number of times I defragged the drives. When I used DOS/Windows 3.1, I do not think I ever defragged the drives, and I never had to re-install Windows 3.1 or DOS.
If you think defragging is so great, it is best to design a filesystem with defragging in mind instead of trying to bolt it on afterwards.
IMHO, dump utilities are OK, but defragging is bad.
"Dumping utilites are ok"
I can't agree with you more, if you look through the code, you'll find that this is what "defragfs" doing: cp/rm/sync. Such operations is file-system independent and should not harm, and more:
1, it could analysis/defrag all file-system not only ext2/3.
2, it could analysis/defrag directories not only partitions.
3, it could tell you how much file and which file you need defrag(and fragments counting of course), you just tell it "YES" so it will "dump" those file instead of the whole file-system for you.
Further, I think nobody in the world was able to build such a "fragment-free" file-system.
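The cp/rm/sync cycle being discussed can be sketched in a few lines of shell. This is only an illustration of the technique the posts describe, not defragfs itself; the file name is hypothetical, and the `printf` line exists only so the sketch is self-contained:

```shell
# Copy-based "defrag" of a single file, as described above.
f=bigfile.iso                  # hypothetical fragmented file
printf 'sample data' > "$f"    # stand-in so the sketch is runnable as-is
cp -p "$f" "$f.defrag"         # the copy makes the filesystem allocate fresh
                               # (ideally contiguous) blocks for the new file
sync                           # flush the copy out of the page cache to disk
mv "$f.defrag" "$f"            # replace the fragmented original with the copy
```

Note that this needs enough free space for a second copy of the file, and `cp -p` is used so ownership and timestamps survive the round trip.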
Nothing beats recreating the filesystem with mkfs after taking a backup. Since the filesystem is newly created, it will not be fragmented at all.
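The dump-and-recreate cycle needs root and an unmountable partition, so here is a runnable stand-in that uses tar on a plain directory (all names hypothetical) purely to show the shape of the operation:

```shell
# Stand-ins: "datadir" plays the role of the mounted filesystem.
mkdir -p datadir
printf 'some file' > datadir/file1.txt

tar -cf backup.tar -C datadir .    # "dump": back everything up
rm -rf datadir && mkdir datadir    # "mkfs": start from an empty filesystem
tar -xf backup.tar -C datadir      # "restore": files return, freshly laid out
```

On a real ext2/3 partition the equivalent tools would be dump(8) for the level-0 backup, mkfs.ext3 to recreate the filesystem, and restore(8) to repopulate it.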
Electro said,
Quote:
Even if it does support the filesystem in question, it can still mess up the filesystem's structure
Such operations are filesystem-independent and should do no harm
e2fsck is specifically designed for ext2 and has been in use for many years, but what happens when you run it on a filesystem that is mounted? So only the users of the program can say whether it is harmful or not, from their experience with it.
By the way, do you see any notable performance gain? Or is it only needed for a heavily used filesystem with millions of files? What about home usage?
regards,
Nirmal Tom.
1, If you really think operations like "cp/rm/sync" can harm a filesystem's structure, I don't see how you could use Linux at all.
2, It does not take millions of files: in my tests, even 30,000 mixed-size files and 50% free space left ext3 23% fragmented, with performance degraded to 60%-70% of the original. Have a look here: http://defragfs.sourceforge.net/theory.html
Here is the test on fragmentation/performance analysis:
(If you want to run the test yourself, please see HERE.)
The file list used in the test is HERE.
When I run run.pl in the test, it gives me the following:
[root@server frags]# ./run.pl
Usage: run.pl PARTITION MOUNTPOINT at ./run.pl line 8.
[root@server frags]# ./run.pl /home
Usage: run.pl PARTITION MOUNTPOINT at ./run.pl line 8.
[root@server frags]# ./run.pl /dev/VolGroup00/fed6_home /home
Preparing ext3 partition
umount: /home: device is busy
umount: /home: device is busy
mke2fs 1.39 (29-May-2006)
/dev/VolGroup00/fed6_home is mounted; will not make a filesystem here!
mount: /dev/VolGroup00/fed6_home already mounted or /home busy
mount: according to mtab, /dev/mapper/VolGroup00-fed6_home is already mounted on /home
Making files
Doing Fragmentation test
umount: /home: device is busy
umount: /home: device is busy
mount: /dev/VolGroup00/fed6_home already mounted or /home busy
mount: according to mtab, /dev/mapper/VolGroup00-fed6_home is already mounted on /home
umount: /mnt/mktest.tmp: not mounted
cat: ./fpass-read-1.tmp: No such file or directory
cat: ./fpass-write-1.tmp: No such file or directory
cat: ./fpass-write-1.tmp: No such file or directory
cat: ./fpass-read-2.tmp: No such file or directory
cat: ./fpass-write-2.tmp: No such file or directory
cat: ./fpass-write-2.tmp: No such file or directory
cat: ./fpass-read-3.tmp: No such file or directory
cat: ./fpass-write-3.tmp: No such file or directory
cat: ./fpass-write-3.tmp: No such file or directory
cat: ./fpass-read-4.tmp: No such file or directory
cat: ./fpass-write-4.tmp: No such file or directory
cat: ./fpass-write-4.tmp: No such file or directory
cat: ./fpass-read-5.tmp: No such file or directory
cat: ./fpass-write-5.tmp: No such file or directory
cat: ./fpass-write-5.tmp: No such file or directory
regards,
Nirmal Tom.
Oh, no, no, no... it seems you're doing something dangerous.
Please kindly read the "README" in "frags.tar.bz2" first.
And please note:
1, run it on an empty, unmounted partition, not on something like "/dev/mapper/VolGroup00-fed6_home".
2, the mount point should not be in use; something like "/mnt/tmp" would be good (if you have one).
3, please have a look at run.pl before any test, and modify the script yourself to fit your requirements (the filesystem type, mount options, etc. you want to test).
4, you will need the "sample pattern" files, whose information is used for creating files and doing the read/write/remove operations. You can use the scripts in "frags.tar.bz2" to create your own samples, or just download my samples here: http://defragfs.sourceforge.net/fpass.tar.bz2 (I suggest AT LEAST 150MB of free RAM when running "run.pl" on this sample.)
5, on my samples, "run.pl" can take up to 2 hours to run, and it generates a "result-xxx" file containing the seconds spent in each loop of the tests.
tmcco, your program defragfs is a fake defragger because it does not go to the source of the problem; instead it works at a higher level that does no defragging at all. All it does is copy files from one location to the next. That is not defragging; it is an idiot program that copies files from one drive or partition to another.
Hehe, at least that proves you've read the code, but never actually run it.
I said earlier that defragfs does "cp/rm/sync", which is filesystem-independent and should do no harm. AND FURTHERMORE, the "cp/rm/sync" process DOES work at reducing fragmentation: in my tests, fragmentation on an ext3 partition dropped from 23% to 13% after using defragfs; a single large file downloaded by aMule/BT (like a movie) had about 50,000 fragments before, and after "cp/rm/sync" that decreased to about 2,000. (Put another way, the "cp/rm/sync" process makes the filesystem re-allocate space, which reduces fragments.)
2, Even if you are using an extent-based filesystem (JFS/XFS/Reiser3,4), have you ever thought about the "create/delete/create..." situation? Something like:
. . . . . . . . . .  (10 free blocks, each 4KB)
1 1 2 2 3 3 3 4 5 .  (5 files allocated as extents; 1 block left)
1 1 . . 3 3 3 . 5 .  (delete files 2 and 4)
1 1 6 6 3 3 3 6 5 6  (a new file "6" allocates 4 blocks, which are not contiguous: 3 fragments)
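The fragment count in the last line of the diagram can be checked mechanically: file "6" receives the free blocks at positions 3, 4, 8 and 10, and its fragment count is the number of contiguous runs among them. A small awk sketch (positions hard-coded from the diagram) does the counting:

```shell
# Block positions allocated to file "6" in the diagram above.
free="3 4 8 10"
# A new fragment starts wherever a block does not directly follow its
# predecessor: the runs here are (3,4), (8), (10).
echo "$free" | awk '{
    frags = 1
    for (i = 2; i <= NF; i++)
        if ($i != $(i-1) + 1) frags++
    print frags " fragments"
}'
```

This prints `3 fragments`, matching the diagram.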
To be honest, I had put file system fragmentation out of my mind, because everything you read on the internet tells you to forget about it.
However, your post triggered my curiosity and made me do some investigating.
I don't have Windows on my home PC, so I checked the fragmentation status of my PC at work which runs Windows XP (SP2). It was last defragmented in November 2004 and with 75% of the disk space free (there are 45 gigs free and the drive is 60 gigs), it is showing almost 30% fragmentation.
On my home PC, I run XFS under Linux. This computer was bought in component parts and assembled by yours truly in September 2004. The drives have not been re-formatted since it was originally set up.
The real surprise here is the partition upon which I've mounted /home. It gets treated pretty roughly. Files are downloaded/copied/created & deleted daily on that partition. It has not been re-formatted or defragged since September 2004 and is currently 78% full and shows only 0.74% fragmentation. It contains files ranging in size from a few bytes, to Slackware DVD isos.
By comparison with Windows, file fragmentation under Linux barely exists. My experience over the past 31 months with Linux and 29 months with Windows has proven something to me which I have known since 1999, but never bothered checking.
Sorry tmcco, but your script won't be finding a home on my computer!
Well:
1, you must realise that different tools will report different values, because they use different algorithms.
2, what filesystem is on your XP machine? What tool did you use to measure that 30%?
3, the XFS tool xfs_db calculates fragmentation as total_extents / ideal_extents, whereas defragfs calculates fragmented_files / total_files. That makes a difference; I believe running defragfs on your filesystem would produce different numbers than xfs_db.
5, what xfs_fsr (the XFS-specific defragmenter) does is much the same as the defragfs I provided.
6, defragfs will not be accurate every time: for reiser3 with tails it reports fragmentation too low, for reiser4 with tails too high; on other filesystems it should be accurate.
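The difference between the two metrics in point 3 shows up clearly on made-up numbers (all figures here are hypothetical, chosen only to illustrate that the two formulas measure different things):

```shell
# xfs_db-style inputs (hypothetical): extent counts
total_extents=1300; ideal_extents=1000
# defragfs-style inputs (hypothetical): file counts
fragmented_files=230; total_files=1000

awk -v t="$total_extents" -v i="$ideal_extents" \
    'BEGIN { printf "extents ratio (xfs_db-style):      %.2f\n", t / i }'
awk -v f="$fragmented_files" -v n="$total_files" \
    'BEGIN { printf "fragmented files (defragfs-style): %.1f%%\n", f / n * 100 }'
```

The same disk can thus look 30% over ideal by one measure and 23% fragmented by the other; neither number is wrong, they simply answer different questions.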
Quote:
Originally Posted by tmcco
1, you must realise that different tools will report different values, because they use different algorithms.
Fragmentation is fragmentation.
Quote:
Originally Posted by tmcco
2, what file-system is it on your XP?
NTFS
Quote:
Originally Posted by tmcco
what tool are you using on measuring that 30%?
Norton Speed Disk
Quote:
Originally Posted by tmcco
3, the XFS tool xfs_db calculates fragmentation as total_extents / ideal_extents, whereas defragfs calculates fragmented_files / total_files. That makes a difference; I believe running defragfs on your filesystem would produce different numbers than xfs_db.
...
5, what xfs_fsr (the XFS-specific defragmenter) does is much the same as the defragfs I provided.
I'll re-check my filesystems tonight with xfs_fsr, see what results I get, and post them here.
Your results seem to differ from mine. As I mentioned previously, my /home partition cops a flogging and my system still feels as snappy as on day 1. xfs_db reported 0.74% fragmentation on that partition.
Quote:
Originally Posted by tmcco
6, defragfs will not be accurate every time: for reiser3 with tails it reports fragmentation too low, for reiser4 with tails too high; on other filesystems it should be accurate.
I resisted this thread at first because it felt like a troll, but since it's still alive, here I go.
The problem with your tests, as I see it:
Quote:
5, Read all files. (Sequential Read)
OK, yeah, sequential reads get slower, DUH, when you artificially cause fragmentation on a nearly-full disk partition the way your test does. But why do we care? It's an artificial test designed to create an artificial effect, I'm afraid, for the purpose of promoting the defrag program. When, in the real world, do we do sequential reads like that? (Never.) It's a totally false benchmark. The giveaway is the way the page theory.html starts out talking about "lies", trying to appeal to emotional responses the way advertising does. An actual theory would never begin by talking about "lies and misunderstandings around the world"; that's not a theory, that's an ad slogan, and the two are as far apart as any two things can be.
Real-world disk access is inherently fragmented and not sequential, so file fragmentation is a non-issue.