LinuxQuestions.org > Forums > Linux Forums > Linux - Software
Old 04-17-2007, 04:12 AM   #16
bandwidthjunkie
LQ Newbie
 
Registered: Jan 2007
Location: london
Distribution: Gentoo(amd64) - 2.16.20
Posts: 26

Rep: Reputation: 15

Quote:
Originally Posted by tmcco
3, Rebuild is always the best way, I agree.
What do you mean by rebuild?
 
Old 04-17-2007, 04:16 AM   #17
Electro
LQ Guru
 
Registered: Jan 2002
Posts: 6,042

Rep: Reputation: Disabled
Every filesystem is different, so running a defrag program can damage a filesystem it does not support. Even if it does support the desired filesystem, it can still mess up the filesystem's structure. Linux caches more aggressively than Windows, so defragging while a drive is mounted can be hazardous. It is best to use the filesystem's dump utilities to make a clean, non-fragmented filesystem.

Several years ago, I think one of the causes of Windows 98 breaking down on me was the number of times I defragged the drives. When I used DOS/Windows 3.1, I don't think I ever defragged the drives, and I never had to re-install Windows 3.1 or DOS.

If you think defragging is so great, it is best to design a filesystem with defragging in mind instead of trying to bolt it on afterwards.

IMHO, dump utilities are OK, but defragging is bad.
 
Old 04-17-2007, 06:03 AM   #18
tmcco
Member
 
Registered: May 2003
Location: Unknown
Distribution: Unknown
Posts: 59

Original Poster
Rep: Reputation: 15
Quote:
Originally Posted by Electro
Every filesystem is different, so running a defrag program can damage a filesystem it does not support. Even if it does support the desired filesystem, it can still mess up the filesystem's structure. Linux caches more aggressively than Windows, so defragging while a drive is mounted can be hazardous. It is best to use the filesystem's dump utilities to make a clean, non-fragmented filesystem.

Several years ago, I think one of the causes of Windows 98 breaking down on me was the number of times I defragged the drives. When I used DOS/Windows 3.1, I don't think I ever defragged the drives, and I never had to re-install Windows 3.1 or DOS.

If you think defragging is so great, it is best to design a filesystem with defragging in mind instead of trying to bolt it on afterwards.

IMHO, dump utilities are OK, but defragging is bad.
"Dump utilities are OK"

I couldn't agree with you more. If you look through the code, you'll find that this is exactly what "defragfs" does: cp/rm/sync. Such operations are filesystem-independent and should do no harm. What's more:

1, it can analyze/defrag any filesystem, not only ext2/3.
2, it can analyze/defrag individual directories, not only whole partitions.
3, it can tell you how many files need defragmenting and which ones (with fragment counts, of course); you just tell it "YES" and it will "dump" those files for you instead of the whole filesystem.

Furthermore, I don't think anybody in the world has ever managed to build such a "fragment-free" filesystem.
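The cp/rm/sync pass described above can be sketched as a short, safe shell sequence on a throwaway file (the path and file size here are made up for the demo; this is an illustration of the idea, not the actual defragfs code):

```shell
#!/bin/sh
# Illustrative sketch of the cp/rm/sync idea behind defragfs.
# Copying a file makes the filesystem allocate fresh (ideally
# contiguous) blocks; the copy then replaces the original.
set -e
f=/tmp/defrag-demo.dat
dd if=/dev/zero of="$f" bs=4096 count=16 2>/dev/null  # create a sample file
cp -p "$f" "$f.defrag"   # rewrite into freshly allocated blocks
sync                     # flush the new copy to disk
mv "$f.defrag" "$f"      # replace the original in one step
```

The same round-trip works on any filesystem, which is why the approach is filesystem-independent.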
 
Old 04-17-2007, 06:07 AM   #19
nirmaltom
Member
 
Registered: Jun 2005
Location: India
Distribution: Redhat,Fedora,DSL,Ubuntu
Posts: 238

Rep: Reputation: 30
hi,
bandwidthjunkie said,
Quote:
What do you mean by rebuild?
Nothing but recreating the filesystem using mkfs after taking a backup. Since the filesystem is newly created, it will not be fragmented at all.
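On a real partition that rebuild cycle is roughly umount, back up, mkfs, remount, restore. The round-trip below demonstrates the same back-up/recreate/restore idea safely on a throwaway directory (all paths are made up):

```shell
#!/bin/sh
# The rebuild cycle in miniature: back up, recreate empty, restore.
# On a real partition you would umount it, archive the data (tar or
# dump), run mkfs, remount, and restore from the archive.
set -e
src=/tmp/rebuild-demo
rm -rf "$src" && mkdir -p "$src"
echo "hello" > "$src/file.txt"
tar -C "$src" -cf /tmp/rebuild-demo.tar .   # 1. take a backup
rm -rf "$src" && mkdir -p "$src"            # 2. "mkfs": start from empty
tar -C "$src" -xf /tmp/rebuild-demo.tar     # 3. restore the data
```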
Electro said,
Quote:
Even if it does support the desired filesystem, it can still mess up the filesystem's structure
That's really true; I accept it.
regards,
Nirmal Tom
 
Old 04-17-2007, 06:14 AM   #20
nirmaltom
Member
 
Registered: Jun 2005
Location: India
Distribution: Redhat,Fedora,DSL,Ubuntu
Posts: 238

Rep: Reputation: 30
hi,
tmcco said,
Quote:
Such operations is file-system independent and should not harm
e2fsck is specifically designed for ext2 and has been in use for many years, but what happens when you run it on a filesystem that is mounted? So only the users of the program can say whether it is harmful or not, from their experience with it.

By the way, do you see any notable performance gain? Or is it only needed for heavy filesystems with millions of files? What about home usage?

regards,
Nirmal Tom.

Last edited by nirmaltom; 04-17-2007 at 06:16 AM.
 
Old 04-17-2007, 07:13 AM   #21
tmcco
Member
 
Registered: May 2003
Location: Unknown
Distribution: Unknown
Posts: 59

Original Poster
Rep: Reputation: 15
Quote:
Originally Posted by nirmaltom
hi,
tmcco said,


e2fsck is specifically designed for ext2 and has been in use for many years, but what happens when you run it on a filesystem that is mounted? So only the users of the program can say whether it is harmful or not, from their experience with it.

By the way, do you see any notable performance gain? Or is it only needed for heavy filesystems with millions of files? What about home usage?

regards,
Nirmal Tom.
1, If you really think operations like "cp/rm/sync" can harm a filesystem's structure, I believe you could hardly use Linux at all.

2, It does not take millions of files; in my tests, even 30000 mixed-size files with 50% free space could leave ext3 23% fragmented, with performance degraded to 60%-70%. Have a look here: http://defragfs.sourceforge.net/theory.html
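The ratio defragfs reports (fragmented_files / total_files) can be approximated by hand with filefrag from e2fsprogs, assuming it is installed. This is a rough sketch, not the actual defragfs code; the directory is arbitrary, and files whose extent count cannot be read (unsupported filesystem, missing tool) are simply skipped:

```shell
#!/bin/sh
# Rough per-directory fragmentation ratio: count files that filefrag
# reports as having more than one extent, divide by total files seen.
dir=${1:-/tmp}
total=0; fragged=0
for f in "$dir"/*; do
    [ -f "$f" ] || continue
    total=$((total + 1))
    ext=$(filefrag "$f" 2>/dev/null | awk '{print $2}')
    case $ext in
        ''|*[!0-9]*) continue ;;   # filefrag missing or fs unsupported
    esac
    if [ "$ext" -gt 1 ]; then fragged=$((fragged + 1)); fi
done
echo "$fragged of $total files are fragmented"
```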
 
Old 04-17-2007, 09:07 AM   #22
nirmaltom
Member
 
Registered: Jun 2005
Location: India
Distribution: Redhat,Fedora,DSL,Ubuntu
Posts: 238

Rep: Reputation: 30
hi,
Quote:
in my tests, even 30000 mixed-size files with 50% free space could leave ext3 23% fragmented, with performance degraded to 60%-70%
That sounds high.
Let me try it on mine.

regards,
Nirmal Tom.
 
Old 04-17-2007, 09:29 AM   #23
nirmaltom
Member
 
Registered: Jun 2005
Location: India
Distribution: Redhat,Fedora,DSL,Ubuntu
Posts: 238

Rep: Reputation: 30
hi,
On the theory page,
Quote:
Here comes the test on fragmentation/performance analysis:
(if you want take the test yourself, please see HERE)
The file-list used in the test is HERE
When I run run.pl in the test, it gives me the following:
[root@server frags]# ./run.pl
Usage: run.pl PARTITION MOUNTPOINT at ./run.pl line 8.
[root@server frags]# ./run.pl /home
Usage: run.pl PARTITION MOUNTPOINT at ./run.pl line 8.
[root@server frags]# ./run.pl /dev/VolGroup00/fed6_home /home
Preparing ext3o partitionumount: /home: device is busy
umount: /home: device is busy
mke2fs 1.39 (29-May-2006)
/dev/VolGroup00/fed6_home is mounted; will not make a filesystem here!
mount: /dev/VolGroup00/fed6_home already mounted or /home busy
mount: according to mtab, /dev/mapper/VolGroup00-fed6_home is already mounted on /home
Making filesDoing Fragmentation testumount: /home: device is busy
umount: /home: device is busy
mount: /dev/VolGroup00/fed6_home already mounted or /home busy
mount: according to mtab, /dev/mapper/VolGroup00-fed6_home is already mounted on /home
umount: /mnt/mktest.tmp: not mounted
cat: ./fpass-read-1.tmp: No such file or directory
cat: ./fpass-write-1.tmp: No such file or directory
cat: ./fpass-write-1.tmp: No such file or directory
cat: ./fpass-read-2.tmp: No such file or directory
cat: ./fpass-write-2.tmp: No such file or directory
cat: ./fpass-write-2.tmp: No such file or directory
cat: ./fpass-read-3.tmp^Xcat: ./fpass-write-3.tmp: No such file or directory
cat: ./fpass-write-3.tmp: No such file or directory
cat: ./fpass-read-4.tmp: No such file or directory
cat: ./fpass-write-4.tmp: No such file or directory
cat: ./fpass-write-4.tmp: No such file or directory
cat: ./fpass-read-5.tmp: No such file or directory
cat: ./fpass-write-5.tmp: No such file or directory
cat: ./fpass-write-5.tmp: No such file or directory

[root@server frags]#

regards,
nirmal tom.
 
Old 04-17-2007, 10:20 AM   #24
tmcco
Member
 
Registered: May 2003
Location: Unknown
Distribution: Unknown
Posts: 59

Original Poster
Rep: Reputation: 15
Quote:
Originally Posted by nirmaltom
hi,
On the theory page,


When I run run.pl in the test, it gives me the following:
[root@server frags]# ./run.pl
Usage: run.pl PARTITION MOUNTPOINT at ./run.pl line 8.
[root@server frags]# ./run.pl /home
Usage: run.pl PARTITION MOUNTPOINT at ./run.pl line 8.
[root@server frags]# ./run.pl /dev/VolGroup00/fed6_home /home
Preparing ext3o partitionumount: /home: device is busy
umount: /home: device is busy
mke2fs 1.39 (29-May-2006)
/dev/VolGroup00/fed6_home is mounted; will not make a filesystem here!
mount: /dev/VolGroup00/fed6_home already mounted or /home busy
mount: according to mtab, /dev/mapper/VolGroup00-fed6_home is already mounted on /home
Making filesDoing Fragmentation testumount: /home: device is busy
umount: /home: device is busy
mount: /dev/VolGroup00/fed6_home already mounted or /home busy
mount: according to mtab, /dev/mapper/VolGroup00-fed6_home is already mounted on /home
umount: /mnt/mktest.tmp: not mounted
cat: ./fpass-read-1.tmp: No such file or directory
cat: ./fpass-write-1.tmp: No such file or directory
cat: ./fpass-write-1.tmp: No such file or directory
cat: ./fpass-read-2.tmp: No such file or directory
cat: ./fpass-write-2.tmp: No such file or directory
cat: ./fpass-write-2.tmp: No such file or directory
cat: ./fpass-read-3.tmp^Xcat: ./fpass-write-3.tmp: No such file or directory
cat: ./fpass-write-3.tmp: No such file or directory
cat: ./fpass-read-4.tmp: No such file or directory
cat: ./fpass-write-4.tmp: No such file or directory
cat: ./fpass-write-4.tmp: No such file or directory
cat: ./fpass-read-5.tmp: No such file or directory
cat: ./fpass-write-5.tmp: No such file or directory
cat: ./fpass-write-5.tmp: No such file or directory

[root@server frags]#

regards,
nirmal tom.
Oh, no, no, no... it seems you're doing something dangerous.

Please kindly read the "README" in "frags.tar.bz2" first.
And please note:

1, run it on an empty, unmounted partition, not on something in use like "/dev/mapper/VolGroup00-fed6_home".

2, the mount point should not be occupied; something like "/mnt/tmp" may be good (if you have one).

3, please have a look at run.pl before any test, and modify the script to fit your requirements (the filesystem type, mount options, etc. you want to test).

4, you will need the "sample pattern" files, whose information is used for creating files and reading/writing/removing them; you may use the scripts in "frags.tar.bz2" to create your own samples, or just download my samples here: http://defragfs.sourceforge.net/fpass.tar.bz2 (I suggest AT LEAST 150MB of free RAM when running "run.pl" on this sample)

5, "run.pl" on my samples can take up to 2 hours to run, and generates a "result-xxx" file containing the seconds spent in each loop of the tests.

I hope this is helpful!
 
Old 04-18-2007, 12:52 AM   #25
Electro
LQ Guru
 
Registered: Jan 2002
Posts: 6,042

Rep: Reputation: Disabled
tmcco, your program defragfs is a fake defragger because it does not go to the source of the problem; instead it works at a higher level that does not do any defragging at all. All it does is copy files from one location to the next. That is not defragging; it is just a program that copies files from one drive or partition to another drive or partition.
 
Old 04-18-2007, 01:31 AM   #26
tmcco
Member
 
Registered: May 2003
Location: Unknown
Distribution: Unknown
Posts: 59

Original Poster
Rep: Reputation: 15
Quote:
Originally Posted by Electro
tmcco, your program defragfs is a fake defragger because it does not go to the source of the problem; instead it works at a higher level that does not do any defragging at all. All it does is copy files from one location to the next. That is not defragging; it is just a program that copies files from one drive or partition to another drive or partition.
Hehe, at least that proves you've read the code, but not that you've run it even once.

I said earlier that defragfs does "cp/rm/sync", which is filesystem-independent and should do no harm. AND FURTHERMORE, the "cp/rm/sync" process DOES work at reducing fragmentation: in my tests, an ext3 partition's fragmentation decreased from 23% to 13% after using defragfs; one single large file downloaded by amule/bt (like a movie) had about 50000 fragments before, and after "cp/rm/sync" the fragment count decreased to about 2000. (Or, I would say, the "cp/rm/sync" process makes the filesystem re-allocate space in a way that reduces fragments.)

Please read the sample output, and see how files were defragmented by defragfs: http://defragfs.sourceforge.net/sample.txt
 
Old 04-18-2007, 06:26 AM   #27
rkelsen
Senior Member
 
Registered: Sep 2004
Distribution: slackware
Posts: 4,462
Blog Entries: 7

Rep: Reputation: 2561
Quote:
Originally Posted by tmcco
2, Even if you are using an extent-based filesystem (JFS/XFS/Reiser3,4), have you ever thought about the "create/delete/create..." situation? Something like:
. . . . . . . . . . (10 free blocks, each 4KB)
1 1 2 2 3 3 3 4 5 . (you have 5 files allocated by "extent", 1 block left)
1 1 . . 3 3 3 . 5 . (delete 2 and 4)
1 1 6 6 3 3 3 6 5 6 (a new file "6" allocates 4 blocks, which are not contiguous: 3 fragments)
To be honest, I had put file system fragmentation out of my mind, because everything you read on the internet tells you to forget about it.

However, your post triggered my curiosity and made me do some investigating.

I don't have Windows on my home PC, so I checked the fragmentation status of my PC at work which runs Windows XP (SP2). It was last defragmented in November 2004 and with 75% of the disk space free (there are 45 gigs free and the drive is 60 gigs), it is showing almost 30% fragmentation.

On my home PC, I run XFS under Linux. This computer was bought in component parts and assembled by yours truly in September 2004. The drives have not been re-formatted since it was originally set up.

Code:
root@here:~# xfs_db -c frag -r /dev/sda2
actual 193151, ideal 191802, fragmentation factor 0.70%

root@here:~# xfs_db -c frag -r /dev/sda3
actual 286038, ideal 285156, fragmentation factor 0.31%

root@here:~# xfs_db -c frag -r /dev/sda4
actual 340917, ideal 338396, fragmentation factor 0.74%

root@here:~# xfs_db -c frag -r /dev/sdb2
actual 21935, ideal 21666, fragmentation factor 1.23%
Code:
root@here:~# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda2             24406832   4460496  19946336  19% /
/dev/sda3             24406832   5008220  19398612  21% /mnt/spare
/dev/sda4            143470648 111125484  32345164  78% /home
/dev/sdb2            121038792  56530628  64508164  47% /mnt/data
The real surprise here is the partition upon which I've mounted /home. It gets treated pretty roughly. Files are downloaded/copied/created & deleted daily on that partition. It has not been re-formatted or defragged since September 2004 and is currently 78% full and shows only 0.74% fragmentation. It contains files ranging in size from a few bytes, to Slackware DVD isos.

By comparison with Windows, file fragmentation under Linux barely exists. My experience over the past 31 months with Linux and 29 months with Windows has proven something to me which I have known since 1999, but never bothered checking.

Sorry tmcco, but your script won't be finding a home on my computer!

Last edited by rkelsen; 04-18-2007 at 06:43 AM.
 
Old 04-18-2007, 07:56 AM   #28
tmcco
Member
 
Registered: May 2003
Location: Unknown
Distribution: Unknown
Posts: 59

Original Poster
Rep: Reputation: 15
Quote:
Originally Posted by rkelsen
To be honest, I had put file system fragmentation out of my mind, because everything you read on the internet tells you to forget about it.

However, your post triggered my curiosity and made me do some investigating.

I don't have Windows on my home PC, so I checked the fragmentation status of my PC at work which runs Windows XP (SP2). It was last defragmented in November 2004 and with 75% of the disk space free (there are 45 gigs free and the drive is 60 gigs), it is showing almost 30% fragmentation.

On my home PC, I run XFS under Linux. This computer was bought in component parts and assembled by yours truly in September 2004. The drives have not been re-formatted since it was originally set up.

Code:
root@here:~# xfs_db -c frag -r /dev/sda2
actual 193151, ideal 191802, fragmentation factor 0.70%

root@here:~# xfs_db -c frag -r /dev/sda3
actual 286038, ideal 285156, fragmentation factor 0.31%

root@here:~# xfs_db -c frag -r /dev/sda4
actual 340917, ideal 338396, fragmentation factor 0.74%

root@here:~# xfs_db -c frag -r /dev/sdb2
actual 21935, ideal 21666, fragmentation factor 1.23%
Code:
root@here:~# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda2             24406832   4460496  19946336  19% /
/dev/sda3             24406832   5008220  19398612  21% /mnt/spare
/dev/sda4            143470648 111125484  32345164  78% /home
/dev/sdb2            121038792  56530628  64508164  47% /mnt/data
The real surprise here is the partition upon which I've mounted /home. It gets treated pretty roughly. Files are downloaded/copied/created & deleted daily on that partition. It has not been re-formatted or defragged since September 2004 and is currently 78% full and shows only 0.74% fragmentation. It contains files ranging in size from a few bytes, to Slackware DVD isos.

By comparison with Windows, file fragmentation under Linux barely exists. My experience over the past 31 months with Linux and 29 months with Windows has proven something to me which I have known since 1999, but never bothered checking.

Sorry tmcco, but your script won't be finding a home on my computer!
Well:

1, you must understand that different tools will report different values, because they use different algorithms.

2, what filesystem is on your XP machine? What tool are you using to measure that 30%?

3, xfs_db calculates fragmentation as total_extents / ideal_extents, whereas defragfs calculates fragmented_files / total_files. That makes a difference; I believe running defragfs on your filesystem would produce different numbers than xfs_db.

4, without a doubt, XFS does fragment over time, and performance drops pretty seriously; you might want to look here: http://defragfs.sourceforge.net/theory2.html

5, what xfs_fsr (the XFS-specific defragmenter) does is much the same as what the defragfs I provided does.

6, defragfs will not be accurate every time: for reiser3 with tails it reports fragment counts too low, for reiser4 with tails too high; for other filesystems it should be accurate.
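As an aside on what xfs_db's numbers mean: its printed factor appears to be (actual - ideal) / actual extents (an assumption on my part, checked against the /dev/sda4 figures rkelsen posted above, actual 340917 and ideal 338396):

```shell
#!/bin/sh
# Re-deriving xfs_db's fragmentation factor from the /dev/sda4 numbers
# quoted earlier in the thread, assuming factor = (actual-ideal)/actual.
awk 'BEGIN { actual = 340917; ideal = 338396
             printf "fragmentation factor %.2f%%\n",
                    (actual - ideal) / actual * 100 }'
# prints "fragmentation factor 0.74%", matching xfs_db's report
```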
 
Old 04-18-2007, 08:50 PM   #29
rkelsen
Senior Member
 
Registered: Sep 2004
Distribution: slackware
Posts: 4,462
Blog Entries: 7

Rep: Reputation: 2561
Quote:
Originally Posted by tmcco
1, you must understand that different tools will report different values, because they use different algorithms.
Fragmentation is fragmentation.
Quote:
Originally Posted by tmcco
2, what filesystem is on your XP machine?
NTFS
Quote:
Originally Posted by tmcco
What tool are you using to measure that 30%?
Norton Speed Disk
Quote:
Originally Posted by tmcco
3, xfs_db calculates fragmentation as total_extents / ideal_extents, whereas defragfs calculates fragmented_files / total_files. That makes a difference; I believe running defragfs on your filesystem would produce different numbers than xfs_db.
...
5, what xfs_fsr (the XFS-specific defragmenter) does is much the same as what the defragfs I provided does.
I'll re-check my filesystems tonight with xfs_fsr, see what results I get, and post them here.
Quote:
Originally Posted by tmcco
4, without a doubt, XFS does fragment over time, and performance drops pretty seriously; you might want to look here: http://defragfs.sourceforge.net/theory2.html
Your results seem to differ from mine. As I mentioned previously, my /home partition cops a flogging, yet my system still feels as snappy as on day 1. xfs_db reported 0.74% fragmentation on that partition.
Quote:
Originally Posted by tmcco
6, defragfs will not be accurate every time: for reiser3 with tails it reports fragment counts too low, for reiser4 with tails too high; for other filesystems it should be accurate.
I'm still not convinced.
 
Old 04-18-2007, 09:53 PM   #30
studioj
Member
 
Registered: Oct 2006
Posts: 460

Rep: Reputation: 31
I resisted this thread at first because it felt like a troll, but since it's still alive, here I go.

The problem with your tests, as I see it:
Quote:
5, Read all files. (Sequential Read)
OK, yeah, sequential reads get slower when you artificially cause fragmentation on a nearly full disk partition, like your test does. DUH! But why do we care? It's an artificial test designed to create an artificial effect, I'm afraid, to the end of promoting the defrag program. When in the real world do we do sequential reads like that? (Never.) It's a totally false benchmark. The giveaway is the way the theory.html page starts out talking about "lies", trying to appeal to emotional responses the way advertising does. Actual theory would never begin by talking about "lies, misunderstandings around the world"; that's not a theory, that's an ad slogan, and the two are as far apart as any two things can be.

Real-world disk access is inherently fragmented and not sequential, so file fragmentation is a non-issue.
 
  

