Old 04-28-2007, 10:52 PM   #91
tmcco
Member
 
Registered: May 2003
Location: Unknown
Distribution: Unknown
Posts: 59

Original Poster
Rep: Reputation: 15

Quote:
Originally Posted by tytso
Hi, it's been pointed out to me that there's been this thread about fragmentation, and it's apparently degenerated a fair amount, and furthermore that my name has been dragged into this. So let me try to clear up some points.

First of all, if someone wants to pull out Dean Anderson's opinion of me, some explanation of the background is in order. Since I have previously served as chair of the IP Security Working Group and as a member of the security area directorate of the Internet Engineering Task Force (IETF, the standards body of the Internet), I was asked to serve as one of the two IETF mailing list "Sergeants at Arms", who are responsible for enforcing the mailing list code of conduct. In that role, I asked Mr. Dean Anderson to refrain from violating those guidelines, and he created said web page as a result. Some time later, as a result of his continued violation of these guidelines, Mr. Anderson was banned from all IETF mailing lists, not by my decision (in my role as an IETF Sergeant at Arms, I report to the IETF chair, and can be overruled by him, although to date this has never happened), but by the Internet Engineering Steering Group. Their decision to ban Mr. Anderson was taken after a request from the IETF community, following a process defined by RFC 3683. The reasons for this ban can be found here and here. Mr. Dean Anderson subsequently appealed this decision to the Internet Architecture Board, which upheld the decision by the IESG. Folks should feel free to look at the decisions reached by the leadership of the IETF, and decide whether they come down on the side of them (and me), or that of Mr. Dean Anderson.

Secondly, the note referenced by tmcco was mine. However, it should be noted that it was in reply to his benchmarks about fragmentation. It is certainly true that Linux filesystems can suffer from fragmentation, and his benchmark fairly compared multiple filesystems given a particular workload --- which happened to involve randomly deleting some number of files, and writing other files of different random lengths, with all files in the same directory. Whether or not the workload measured by his benchmark is a fair one is a different story. Indeed, it's pretty obvious that it doesn't represent an accurate model of real-life usage for most users. My note was trying to encourage him to develop a better set of benchmarks.

As far as his specific program is concerned, I'm not particularly enthusiastic about pure userspace programs that attempt to defragment a filesystem. There are significant locking problems that you have to worry about --- what if the file is being accessed and modified while a userspace defragger is copying and rewriting it? Secondly, these programs don't completely eliminate fragmentation, and indeed, to the extent they work, it is because Linux (and most modern Unix filesystems) are fragmentation resistant. (Note that I didn't say fragmentation-proof!)

So my preference is to improve existing filesystems to make them more fragmentation resistant, at least for common, real-life workloads, instead of writing defragmentation programs. We should be able to do better than Microsoft Windows, and not require people to run slow defragmentation programs. Of course, the ethos of open source software is that people can work on whatever they want, and if they want to work on defragmentation programs, they are perfectly free to --- but it's simply not something I'm interested in myself. However, for people who do want to work on defragmenters, my recommendation is to either create off-line defraggers (such as fixing and updating e2defrag, which is an ext2-specific defragger that only works on 1k blocksizes, doesn't support some of the newer ext3/4 filesystem features, and has the sad problem that if it gets interrupted mid-defrag, your filesystem is completely toasted), or to create on-line defraggers that have kernel support (such as the ext4 defragger being worked on by Takashi Sato-san from NEC). My personal interest in the kernel support for on-line defragging is to be able to do on-line shrinking of filesystems, which means that the kernel support for on-line defrag can solve multiple problems, which is always a sign of a good design.

So the net of this is that my comments were not meant as an unlimited endorsement of tmcco's ideas, nor (in particular) of his selected defragmentation approach. Do I hope that he might do some more work in this area, however? Of course! In order to do that, though, it means we need to encourage each other in areas where we are on the right track, and gently correct people when they are wrong. It is for that reason that I tried to get him to focus on better benchmarks, and in fact I spent no time criticizing his defragmentation program. This is first of all because he didn't ask me to comment on it, and secondly, because it's better to encourage people in areas where you feel they might be able to contribute useful work, as opposed to tearing them down in other areas.

So let me end this by quoting from Rodney King, "Can't we all just get along?"

Regards,

-- Ted
1. This is the reason why I found Mr Ts'o's advice much more constructive: he is a master of filesystems, while some others are masters of personal attack.

2. I am indeed taking Mr Ts'o's advice seriously (the benchmark should reflect the worst case of how a filesystem becomes fragmented). The new, much more controllable aging-benchmark utility is at: http://sourceforge.net/projects/ngmark , and constructive suggestions are always warmly welcome.

3. I agree that defragmentation is complicated (locking, etc.). One benefit of a user-space defragmenter is that it is filesystem-independent, but one must be aware that the files must not currently be in use (see the sketch below).

4. The profile pointing to www.tmcco.com is terribly out of date, so never mind it.
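
To illustrate point 3, here is a minimal sketch of such an in-use check, using the standard fuser utility from psmisc (the path is just a placeholder; this is not code from defragfs itself):

f=/path/to/file
# fuser -s is silent; it exits 0 if any process currently has the file open
if fuser -s "$f"; then
    echo "$f is in use, skipping"
else
    echo "$f is safe to rewrite"
fi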

Last edited by tmcco; 04-29-2007 at 05:22 AM.
 
Old 04-28-2007, 10:55 PM   #92
tmcco
Member
 
Registered: May 2003
Location: Unknown
Distribution: Unknown
Posts: 59

Original Poster
Rep: Reputation: 15
Quote:
Originally Posted by tytso
For the record, no, tmcco did not give me a link to this thread, and I never sent him any statements or comments about this thread or the comments made by him or others on this discussion board, for the simple reason that I didn't know about it. I did not become aware of this discussion until lazlow sent me e-mail about it on Friday morning. My apologies for not replying until now; I'm pretty busy these days.

Regards,

-- Ted
For reference, this is the original message I sent to Mr Ts'o asking for comments on those pages:

From: xucanhao@gmail.com Xu CanHao
To: card@masi.ibp.fr, sct@redhat.com, tytso@mit.edu, adilger@clusterfs.com
Date: Wed, 25 Apr 2007 13:18:57 +0800
Subject: Hi! I encountered some performance degradation caused by fragmentation of ext2/3, any ideas?

Hello!

I encountered some performance degradation caused by fragmentation of
ext2/3 during my research; any ideas? The links are here:
http://defragfs.sourceforge.net/theory.html
http://defragfs.sourceforge.net/theory1.html
http://defragfs.sourceforge.net/theory2.html
http://defragfs.sourceforge.net/defrag.html

Thanks!

Last edited by tmcco; 04-29-2007 at 05:26 AM.
 
Old 05-03-2007, 12:17 PM   #93
lazlow
Senior Member
 
Registered: Jan 2006
Posts: 4,363

Rep: Reputation: 172
Sorry for not responding to this thread earlier. I have been dealing with a spam attack which is coming from multiple machines in Germany. This and other recent events have convinced me to investigate anything that sets off my "spidey sense".

In this thread I have noticed some contradictory statements that are mutually exclusive (one or the other is true, but not both). A couple of easy examples:

In post #71, point #7, it is stated that Mr. Ts'o does not care for some of our posts, yet in post #89 Mr. Ts'o states that at that time (post #71) he was not even aware of this thread.

In Tmcco's profile, which was created in May 2003, a home page of www.tmcco.com is stated. If one looks at http://web.archive.org/web/*/http://tmcco.com (a site that archives old web pages), one can clearly see that the site was Mr. Young's (the current owner of the site) at least by April 2001. Further, in a phone conversation (email confirmation pending), Mr. Young indicated that he has owned the domain since 1997 and has no affiliation with Xu CanHao. This raises the question: why would someone claim a home page that was clearly not theirs for at least two years prior (per archive.org) and probably not since 1997 (per Mr. Young)? If it had merely been a typo, most of us would have corrected it when it was pointed out earlier in this thread (post #85). I am not saying we do not make mistakes; clearly I made one with my post #76. When my mistake was pointed out to me, I did not delete the post (attempt to hide it). I added an addendum pointing to a post that gave more information, and apologized to Mr. Ts'o.


When I install a piece of software on my computer, I rely on the credibility of the author to ensure that the code does what he says it does. With the size and complexity of today's software, I would venture a guess that 99% of us do. If an author intentionally tries to mislead us about facts in one venue (say, a forum), how can we trust the software or the data produced by that author?


Lazlow
 
Old 05-03-2007, 06:20 PM   #94
rkelsen
Senior Member
 
Registered: Sep 2004
Distribution: slackware
Posts: 4,457
Blog Entries: 7

Rep: Reputation: 2560
Quote:
Originally Posted by lazlow
If an author intentionally tries to mislead us about facts in one venue (say, a forum), how can we trust the software or the data produced by that author?
What amuses me, Lazlow, is that Mr. Ts'o's post in this thread reiterates what many people here have said, for example:
Quote:
Originally Posted by tytso
It is certainly true that Linux filesystems can suffer from fragmentation, and his benchmark fairly compared multiple filesystems given a particular workload ... Whether or not the workload measured by his benchmark is a fair one is a different story. Indeed, it's pretty obvious that it doesn't represent an accurate model of real-life usage for most users.
Quote:
Originally Posted by tytso
As far as his specific program is concerned, I'm not particularly enthusiastic about pure userspace programs that attempt to defragment a filesystem.
Quote:
Originally Posted by tytso
these programs don't completely eliminate fragmentation, and indeed, to the extent they work, it is because Linux (and most modern Unix filesystems) are fragmentation resistant.
There may be some flaming and personal attacks in this thread, but if one were to read through all of it, one would see that all of the above points made by Mr. Ts'o had already been made. It certainly appears that Mr. CanHao has chosen to ignore all of the advice given, and has failed to properly understand Mr. Ts'o's position on this issue.
 
Old 05-03-2007, 06:24 PM   #95
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
Quote:
There may be some flaming and personal attacks in this thread,
Guilty, your honor! If I offended anyone, then I am sorry. However, I don't see that I said anything that I regret. The guy is misrepresenting himself, his "code", and his affiliations with others.
 
Old 05-04-2007, 06:58 AM   #96
tmcco
Member
 
Registered: May 2003
Location: Unknown
Distribution: Unknown
Posts: 59

Original Poster
Rep: Reputation: 15
Quote:
Originally Posted by lazlow
In post #71, point #7, it is stated that Mr. Ts'o does not care for some of our posts, yet in post #89 Mr. Ts'o states that at that time (post #71) he was not even aware of this thread.
It was a misunderstanding: what I meant at that time was that Mr. Ts'o did not use any rude/flame words in his reply mail to me; I was not referring to the forum.

Quote:
Originally Posted by lazlow
In Tmcco's profile, which was created in May 2003, a home page of www.tmcco.com is stated. If one looks at http://web.archive.org/web/*/http://tmcco.com (a site that archives old web pages), one can clearly see that the site was Mr. Young's (the current owner of the site) at least by April 2001. Further, in a phone conversation (email confirmation pending), Mr. Young indicated that he has owned the domain since 1997 and has no affiliation with Xu CanHao. This raises the question: why would someone claim a home page that was clearly not theirs for at least two years prior (per archive.org) and probably not since 1997 (per Mr. Young)? If it had merely been a typo, most of us would have corrected it when it was pointed out earlier in this thread (post #85). I am not saying we do not make mistakes; clearly I made one with my post #76. When my mistake was pointed out to me, I did not delete the post (attempt to hide it). I added an addendum pointing to a post that gave more information, and apologized to Mr. Ts'o.
You are a careful man indeed. The fact is: when I first registered the account "tmcco" in 2003, I filled in the home-page field with "www.tmcco.com" carelessly. The site has nothing to do with me: not in the past, not now, and not in the future. (OMG: I think this is completely off-topic.)

Quote:
Originally Posted by lazlow
When I install a piece of software on my computer, I rely on the credibility of the author to ensure that the code does what he says it does. With the size and complexity of today's software, I would venture a guess that 99% of us do. If an author intentionally tries to mislead us about facts in one venue (say, a forum), how can we trust the software or the data produced by that author?
You should be aware that most Linux software is GPLed, including this tiny program. I do not intend to convince everybody in the world to believe in me, but please do point out any technical problems or constructive suggestions, if anyone has them. (Yes, like what Mr. Ts'o does.)
 
Old 05-04-2007, 07:22 AM   #97
tmcco
Member
 
Registered: May 2003
Location: Unknown
Distribution: Unknown
Posts: 59

Original Poster
Rep: Reputation: 15
Quote:
Originally Posted by rkelsen
What amuses me, Lazlow, is that Mr. Ts'o's post in this thread reiterates what many people here have said, for example:

There may be some flaming and personal attacks in this thread, but if one were to read through all of it, one would see that all of the above points made by Mr. Ts'o had already been made. It certainly appears that Mr. CanHao has chosen to ignore all of the advice given, and has failed to properly understand Mr. Ts'o's position on this issue.
Interesting enough: or we'd say, I stand on one side, some of you on the other, and Mr. Ts'o in the middle:

1.
I: Linux filesystems can suffer from fragmentation.
Some of you: Linux filesystems do not suffer from fragmentation.
Mr. Ts'o: Linux filesystems can suffer from fragmentation, but it depends.

2.
I: a program designed for Linux filesystem defragmentation.
Some of you: the program is fake/useless/thick-headed/ridiculous.
Mr. Ts'o: I'm not particularly enthusiastic about pure userspace programs that attempt to defragment a filesystem.
 
Old 05-04-2007, 08:45 AM   #98
tytso
LQ Newbie
 
Registered: Apr 2007
Location: Medford, MA
Distribution: Debian, Ubuntu
Posts: 9

Rep: Reputation: 4
Quote:
Originally Posted by tmcco
Mr. Ts'o: Linux filesystems can suffer from fragmentation, but it depends.
Actually, I'd say that Linux filesystems can suffer from fragmentation, but for many (most?) user workloads the fragmentation problem is not a severe performance issue.

Quote:
Originally Posted by tmcco
Mr. Ts'o: I'm not particularly enthusiastic about pure userspace programs that attempt to defragment a filesystem.
OK, maybe I was being too polite. Let me be more explicit. I wouldn't use a userspace defragmentation program on my own systems, and wouldn't recommend that most users waste a lot of time on this issue.

Keep in mind, a defragmentation program takes a lot of time to run, and compared to the amount of time that you would actually save, it's not clear it really is a win in the long run. Even on Windows, where fragmentation is a bigger problem, I wonder how many Windows users have wasted vast amounts of time staring at Norton Disk Optimizer while it defrags their disk, only to have their Excel spreadsheet load 2 seconds faster. Which is great, except that they spent 30-60 minutes waiting for their disk to be defragmented. So was it really worth it in the end? Not to mention how many hours people have wasted debating the merits of fragmentation on this and other forum discussions? ;-)

Also keep in mind that Linux is much more aggressive about caching than Windows, so for common files, once they are in your page cache, the on-disk fragmentation doesn't matter nearly as much. So not surprisingly, the best way to improve performance in the vast majority of cases is to spend a small amount of money and increase the amount of memory in the system in question.

As far as userspace defragmentation programs are concerned, as I've said already, (1) they suffer from locking problems, which are fundamentally insoluble, since there's no way to tell if another program is using the file, possibly in read/write mode, while you are trying to defrag it, (2) they don't completely remove the fragmentation, and (3) you can replicate what they do yourself by using a simple shell command: "tar cfj /var/tmp/defrag-save.tar.bz dir ; rm -rf dir ; tar xfj /var/tmp/defrag-save.tar.bz". And again, to the extent that this works, it's because Unix filesystems are naturally fragmentation resistant, and you are simply forcing the system to reallocate the blocks for the files in question. Is that a "fake" program? It depends on your definition of pejoratives, I suppose. It certainly isn't anything complicated, and it doesn't always work, and if there are any other programs accessing the directory when you run the procedure, you will probably cause those programs to malfunction. Understanding these issues is probably far more important than settling on whether or not a userspace defrag program is "fake" or "useless" or "thick-headed", at least in my humble opinion.
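
For instance, a per-file version of the same copy-and-reallocate trick, as a sketch with all of the same caveats (the filename is a placeholder, and nothing may have the file open while you do this):

f=bigfile.avi
cp -p "$f" "$f.tmp"    # the copy is written sequentially; -p preserves mode, ownership, timestamps
mv "$f.tmp" "$f"       # the rename is atomic within one filesystem

As with the tar version, any process that already had the old file open keeps reading the old, now-unlinked blocks, so this is only safe on quiescent files.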

Last edited by tytso; 05-04-2007 at 10:38 AM.
 
Old 05-04-2007, 09:13 AM   #99
lazlow
Senior Member
 
Registered: Jan 2006
Posts: 4,363

Rep: Reputation: 172
How does:

Quote:
what I meant at that time was that Mr. Ts'o did not use any rude/flame words in his reply mail to me; I was not referring to the forum.
Come anywhere close to the meaning of:

Quote:
Ted does not like some of you(which I do not want point out) posting rude/flame words.

"Teds" quote (above) is quite clear. The use of the word posting is certainly meant to indicate a forum of some sort. It is also quite different from what you are now attempting to say it meant.


Quote:
The fact is: when I first registered the account "tmcco" in 2003, I filled in the home-page field with "www.tmcco.com" carelessly. The site has nothing to do with me: not in the past, not now, and not in the future. (OMG: I think this is completely off-topic.)
This is at least very clear. You admit intentionally and knowingly presenting FALSE information. The relevance of this goes to your credibility.

Quote:
You should be aware that most Linux software is GPLed, including this tiny program.
What does GPLing software have to do with credibility? One could GPL a virus if one wished. If you were referring to the warranty section, that is there to protect people of good intentions from being sued for an honest mistake. As I have said before, I have no problem with people making an HONEST mistake.

Quote:
or we'd say, I stand on one side, some of you on the other, and Mr. Ts'o in the middle:
Who is this "we'd"? You seem to be implying that it is you and Mr. Ts'o. If this is what you meant, I think that Mr. Ts'o's knowledge of this thread would allow him to speak for himself.


One good thing is that you now seem to understand that calling a cited reference by his/her first name is bad.


Edit: This was being typed as Mr. Ts'o's post (above) was being posted.

Last edited by lazlow; 05-04-2007 at 09:16 AM.
 
Old 05-04-2007, 10:23 AM   #100
tmcco
Member
 
Registered: May 2003
Location: Unknown
Distribution: Unknown
Posts: 59

Original Poster
Rep: Reputation: 15
Quote:
Originally Posted by lazlow
[lazlow's post #99, quoted in full above]
1. The information
You will find that "tmcco" is not my true name either (and there are many other such details); I do not intend to put my private data on a public forum like this one. So I would suggest you make a distinction between the internet and the real world.

2. The appellation
This is an example of "The Clash of Civilizations", the oriental and the western. I will follow most of you from now on, and I apologize if Mr Ts'o feels that being called by his first name is offensive.
 
Old 05-04-2007, 10:58 AM   #101
lazlow
Senior Member
 
Registered: Jan 2006
Posts: 4,363

Rep: Reputation: 172
I really have no idea what you mean by:

Quote:
1. The information
You will find that "tmcco" is not my true name either (and there are many other such details); I do not intend to put my private data on a public forum like this one. So I would suggest you make a distinction between the internet and the real world.

I do not expect you to use your real name here; I doubt very many do (I do not). I have not asked you to post your private data here. Posting false data is very different from not posting your private data (I do not post a home page here).

Most of the rules of the real world apply to the internet as well. If one lies and is caught, one will be exposed. If one claims something is his and it is not, he will be exposed. Twisting someone else's opinions/statements to fit one's own agenda is generally not tolerated. When people try to "edit history" by repeatedly changing what was said, or changing its meaning, they will be exposed. In short, just like in the real world, one will be held accountable for one's actions.

Last edited by lazlow; 05-04-2007 at 11:00 AM.
 
Old 05-04-2007, 11:18 AM   #102
tmcco
Member
 
Registered: May 2003
Location: Unknown
Distribution: Unknown
Posts: 59

Original Poster
Rep: Reputation: 15
Quote:
Originally Posted by tytso
Actually, I'd say that Linux filesystems can suffer from fragmentation, but for many (most?) user workloads the fragmentation problem is not a severe performance issue.
The reason I started some work on defragmentation is the behavior of downloaders like Firefox/D4X/aMule/BitTorrent: if you download a not-so-small file (e.g. a movie) with any of them, the completed file can contain a huge number of fragments, on ext3 and reiserfs alike, and in the end that impacts performance greatly.

1, Before:
#filefrag 1.avi
1.avi: 48044 extents found
#time cat 1.avi>/dev/null
real 3m1.478s
user 0m0.020s
sys 0m2.086s

2, After:
#cp 1.avi 2.avi
#filefrag 2.avi
2.avi: 128 extents found
#time cat 2.avi>/dev/null
real 0m25.329s
user 0m0.012s
sys 0m1.329s

Yes, one solution is to isolate the temporary directory, but that does not eliminate fragmentation. More interesting is what I recalled about Internet Explorer's behavior: when you download a file, it first goes to a temporary directory and is then COPIED to its final place, which can reduce the file's fragmentation to some degree (just as step 2 above does).
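
A minimal sketch of that download-to-a-scratch-directory-then-copy idea (wget, the scratch directory, and the paths are placeholders here, not part of any particular downloader):

url=$1; dest=$2
tmp=/var/tmp/dl-scratch
mkdir -p "$tmp"
wget -P "$tmp" "$url"              # assemble the file in the scratch directory
f="$tmp/$(basename "$url")"
cp "$f" "$dest" && rm -f "$f"      # one sequential pass into the final location

Because the final cp writes the file in one sequential pass, it typically ends up in far fewer extents than the randomly assembled original, which is exactly the effect the cp shows in the measurement above.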

Quote:
Originally Posted by tytso
OK, maybe I was being too polite. Let me be more explicit. I wouldn't use a userspace defragmentation program on my own systems, and wouldn't recommend that most users waste a lot of time on this issue.

Keep in mind, a defragmentation program takes a lot of time to run, and compared to the amount of time that you would actually save, it's not clear it really is a win in the long run. Even on Windows, where fragmentation is a bigger problem, I wonder how many Windows users have wasted vast amounts of time staring at Norton Disk Optimizer while it defrags their disk, only to have their Excel spreadsheet load 2 seconds faster. Which is great, except that they spent 30-60 minutes waiting for their disk to be defragmented. So was it really worth it in the end? Not to mention how many hours people have wasted debating the merits of fragmentation on this and other forum discussions? ;-)

Also keep in mind that Linux is much more aggressive about caching than Windows, so for common files, once they are in your page cache, the on-disk fragmentation doesn't matter nearly as much. So not surprisingly, the best way to improve performance in the vast majority of cases is to spend a small amount of money and increase the amount of memory in the system in question.

As far as userspace defragmentation programs are concerned, as I've said already, (1) they suffer from locking problems, which are fundamentally insoluble, since there's no way to tell if another program is using the file, possibly in read/write mode, while you are trying to defrag it, (2) they don't completely remove the fragmentation, and (3) you can replicate what they do yourself by using a simple shell command: "tar cfj /var/tmp/defrag-save.tar.bz dir ; rm -rf dir ; tar xfj /var/tmp/defrag-save.tar.bz". And again, to the extent that this works, it's because Unix filesystems are naturally fragmentation resistant, and you are simply forcing the system to reallocate the blocks for the files in question. Is that a "fake" program? It depends on your definition of pejoratives, I suppose. It certainly isn't anything complicated, and it doesn't always work, and if there are any other programs accessing the directory when you run the procedure, you will probably cause those programs to malfunction. Understanding these issues is probably far more important than settling on whether or not a userspace defrag program is "fake" or "useless" or "thick-headed", at least in my humble opinion.
1. Traditional defragmenters are terribly slow and may not be worth running, so I've heard of some Windows users using Norton Ghost for defragmentation (yes, dumping and restoring), which can eliminate fragmentation in only a few minutes at the cost of system down-time. As for defragfs itself, one can run it when the system is idle.

2. Some "allocate-on-flush" filesystems can take advantage of large memory, but ext3/reiserfs do not support it (that is why things like the above happen). XFS and Reiser4 support allocate-on-flush, which can produce a better disk layout (that is why many people find XFS has a low fragmentation rate).

3. And I remember you mentioned that ext4 would have a kernel-side defragmenter; maybe it will eliminate the locking problems, but: what will it look like (something like the Reiser4 repacker)? Will it hurt performance while it runs? Will it be worth running?

4.
(1) This is really important, and the program should give the user a hint before doing anything (the user should be aware of what he/she is doing).

(2)&(3): The program can report the fragmentation of a filesystem or directory (with the help of filefrag), let the user decide whether, and which, files need a "dump", and then dump and report; a rough sketch of that scan-and-report step follows this post. Certainly one could do all of this by hand; the program merely automates the procedure.

Indeed, the program cannot eliminate fragmentation, only reduce it in some cases.

I can say this program works for me because I've tested it on my own systems (ext3/reiserfs; please look here: http://defragfs.sourceforge.net/sample.txt). So I think somebody else might find it useful too (with some caution).
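
For reference, here is the rough sketch of the scan-and-report step promised above, using filefrag from e2fsprogs (the threshold is arbitrary, it must run as root because filefrag uses the FIBMAP ioctl, and it deliberately only reports, rewriting nothing):

dir=${1:-.}; threshold=${2:-50}
# report regular files whose extent count exceeds the threshold
find "$dir" -type f | while read -r f; do
    n=$(filefrag "$f" 2>/dev/null | awk '{print $(NF-2)}')
    case "$n" in ''|*[!0-9]*) continue ;; esac    # skip files filefrag could not read
    [ "$n" -gt "$threshold" ] && echo "$n extents: $f"
done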

Last edited by tmcco; 05-04-2007 at 09:57 PM.
 
Old 05-04-2007, 11:29 AM   #103
tmcco
Member
 
Registered: May 2003
Location: Unknown
Distribution: Unknown
Posts: 59

Original Poster
Rep: Reputation: 15
Quote:
Originally Posted by lazlow
[lazlow's post #101, quoted in full above]
I have nothing more to say if you insist, but I doubt that many people would agree with you that one should reveal all of one's true information on a public forum.
 
Old 05-04-2007, 11:31 AM   #104
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
Quote:
[the before/after filefrag timings from post #102, quoted above]
Did you do the obvious and ensure that the file wasn't cached in memory before deciding that your test was valid? If I ran two similar tests and found that the time changed from 181 seconds to 25 seconds, I would assume that my test was bad, not that what I had done was successful.

There is a reason that bonnie++ creates files that are significantly larger than system RAM.
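
For what it's worth, on kernels 2.6.16 and later one way to get an honest cold-cache timing is to flush the page cache between runs (as root):

sync                                  # write back dirty pages first
echo 3 > /proc/sys/vm/drop_caches     # drop the page cache, dentries and inodes
time cat 1.avi > /dev/null            # this read now really hits the disk

Unmounting and remounting the filesystem, or rebooting, gives the same cold-cache condition.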
 
Old 05-04-2007, 11:32 AM   #105
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141Reputation: 141
Quote:
I have nothing more to say if you insist, but I doubt that many people would agree with you that one should reveal all of one's true information on a public forum.
You don't have to reveal anything. But, what you do reveal should not be an equivocation or a prevarication. IOW, tell the truth or don't tell anything at all.
 
  

