Linux - Software
This forum is for Software issues.
Having a problem installing a new program? Want to know which application is best for the job? Post your question in this forum.
1. My views are based on numbers, charts, and scripts that anybody can verify and reproduce.
Your numbers are based on voodoo and shaking a stick in a particular fashion. The bottom line is this: if we have to *look* for the problem, or manipulate filesystems in a specific way to *see* a problem, then it Simply Doesn't Exist in the real world. Those motorcycles aren't blowing up from using Quaker State/Valvoline/Havoline instead of Mobil One, and Linux filesystems aren't slowing to a crawl due to fragmentation. To quote a famous commercial: "Where's the beef?"
The quickest way to get my attention would be to cite real problems posted by real admins looking for help with fragmented filesystems.
Last edited by Quakeboy02; 04-21-2007 at 10:46 PM.
As to #1: I was typing while you posted (note the time stamps). I think you mostly danced around the issues.
#2. Then why hasn't anybody verified it?
Again, self-referencing is BAD.
You certainly dance well.
1. Pardon? Danced around what issues?
2. Well, after careful experiments, I believe that Linux file systems do fragment and that this degrades performance greatly, so I published my views, numbers, charts, scripts, and samples. Many other people (maybe including you; I used to as well) believe that Linux file systems do not fragment and that the performance degradation can be ignored. They simply ignore any disadvantages; I hope you won't.
3. Links to some papers have been added to http://defragfs.sourceforge.net/theory.html . Although the Unix/BSD/FFS material is somewhat dated, my pages and some of my posts in this thread address Linux file systems.
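For readers who would rather measure than argue, here is a minimal sketch of a per-file fragmentation check. It assumes `filefrag` from e2fsprogs is installed (on FIEMAP-capable kernels it works without root); the helper names are mine, not part of defragfs:

```python
# Sketch: summarise filefrag's "N extents found" output for a set of
# files. Assumes the filefrag binary (e2fsprogs) is on PATH; the
# parse_extents/report names are hypothetical helpers, not a real API.
import re
import subprocess
import sys

def parse_extents(line):
    """Parse a filefrag summary like 'foo.bin: 3 extents found'."""
    m = re.match(r"(.+): (\d+) extents? found", line.strip())
    if not m:
        return None
    return m.group(1), int(m.group(2))

def report(paths):
    out = subprocess.run(["filefrag"] + list(paths),
                         capture_output=True, text=True).stdout
    results = [r for r in map(parse_extents, out.splitlines()) if r]
    fragged = [(name, n) for name, n in results if n > 1]
    print("files checked: %d, fragmented (>1 extent): %d"
          % (len(results), len(fragged)))
    for name, n in sorted(fragged, key=lambda x: -x[1]):
        print("  %s: %d extents" % (name, n))

if __name__ == "__main__" and len(sys.argv) > 1:
    report(sys.argv[1:])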
And you still haven't stopped self-referencing. Provide links to some real world problems caused by file fragmentation under Linux and you might start persuading people. You're not doing too well at the moment.
It explains some of the reasons why file fragmentation under Linux doesn't matter. This part is particularly relevant:
"Now, this is not to say that 'file fragmentation' is a good thing. It's just that 'file fragmentation' doesn't have the *impact* here that it would have in MSDOS-based systems. The performance difference between a 'file fragmented' Linux file system and a 'file unfragmented' Linux file system is minimal to none, where the same performance difference under MSDOS would be huge.
Under the right circumstances, fragmentation is a neutral thing, neither bad nor good. As to defragging a Linux filesystem (ext2fs), there are tools available, but (because of the design of the system) these tools are rarely (if ever) needed or used. That's the impact of designing the multi-processing/multi-tasking multi-user capacity of the OS into its facilities up front, rather than tacking multi-processing/multi-tasking multi-user support onto an inherently single-processing/single-tasking single-user system."
Emphasis added.
Can you rebut this argument with solid proof and not just another link to your own website?
2. You missed that I post research reports, numbers, charts, and scripts, while most of you just post "I think / I believe / in my experience / I feel / I heard".
3. You missed my earlier posts: I do agree that most disk access patterns are random reads, but those are also heavily affected by fragmentation. In my tests the partition was only 40%-50% full, and after 20 rounds of aging the random-read performance degraded to:
ext2/3: 60%
JFS: 40%
XFS: 40%
Reiser3: 50%
Reiser4: 70%
http://defragfs.sourceforge.net/theory2.html
???WHY???
You cannot understand it because you missed my post #43, but here is a hint from post #43:
4. PLEASE! I find many of you asking the same questions I have already answered, so I hope everyone reads my original posts carefully before making any subjective judgments.
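For concreteness, the kind of "aging" workload being argued about could be sketched like this. This is my own guess at such a workload, not tmcco's actual script; run it only against a scratch directory:

```python
# Sketch of a filesystem "aging" workload (hypothetical, not the
# defragfs test script): interleaved appends scatter each file's
# blocks, and random deletions punch holes in free space.
import os
import random
import time

def age(workdir, rounds=20, files_per_round=50, chunk=64 * 1024):
    """Create, grow, and delete files so free space gets chopped up."""
    random.seed(0)  # reproducible workload
    live, serial = [], 0
    for _ in range(rounds):
        # open a batch of files and append to them in interleaved
        # chunks, encouraging the allocator to scatter their blocks
        batch = []
        for _ in range(files_per_round):
            batch.append(open(os.path.join(workdir, "f%06d" % serial), "wb"))
            serial += 1
        for _ in range(4):
            for f in batch:
                f.write(os.urandom(chunk))
        for f in batch:
            f.close()
            live.append(f.name)
        # delete roughly half of all live files at random
        random.shuffle(live)
        kill, live = live[: len(live) // 2], live[len(live) // 2:]
        for name in kill:
            os.unlink(name)
    return live  # paths of the surviving files

def timed_read(paths):
    """Sequentially read every file; return (bytes_read, seconds)."""
    start = time.monotonic()
    total = 0
    for p in paths:
        with open(p, "rb") as f:
            total += len(f.read())
    return total, time.monotonic() - start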
I resisted this thread at first because it felt like a troll, but since it's still alive, here I go.
The problem with your tests, as I see it:
OK, yeah, sequential reads get slower. DUH! When you artificially cause fragmentation on a nearly full disk partition, like your test does. But why do we care? It's an artificial test designed to create an artificial effect, I'm afraid, to the end of promoting the defrag program. When in the real world do we do sequential reads like that? (Never.) It's a totally false benchmark. The giveaway is the way the page theory.html starts out talking about "lies", trying to appeal to emotional responses the way advertising does. Actual theory would never begin by talking about "lies, misunderstandings around the world". That's not a theory, that's an ad slogan, and the two things are as far apart as any two things can be.
Real-world disk access is inherently fragmented and not sequential, so file fragmentation is a non-issue.
You've missed one important facet of this argument: NO ONE has noted significant performance loss on their Linux systems after 6 months or more...
Now then, I'm sure if everyone ran your defrag script on their machines, a performance hit would be seen in numbers, but until that performance hit starts interfering with day-to-day usage, I doubt anyone will take this seriously.
Sounds to me like you figured out a way to artificially fragment a system and now you want us all to use your script to fix it? How about this: run the same test on a system that's aged through actual use...you up for that?
That's the kind of "proof" I think people here would be more "open minded" to.
Quote:
Originally Posted by rocket357
run the same test on a system that's aged through actual use...you up for that?
That's the kind of "proof" I think people here would be more "open minded" to.
YES. I think a lot of what you are showing us is cached vs. uncached reads. So: do the test without artificially manipulating and swapping the files so that the disk cache has to be refilled, and then show us some performance problems.
And yes, I read the strange ad page as best I could, but I frankly can't understand it, because it looks like nonsense. You seem to be saying that reads and writes, random and sequential, for every filesystem start out 100% equal at what you label 100%. Were these newly formatted disks? I just don't believe throughput was equal for both reads and writes and then suddenly began to change.
We need to see raw benchmarking data, not charts.
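A sketch of how one could collect such raw numbers while controlling for the cache effect mentioned above: evict the file from the page cache with `posix_fadvise(POSIX_FADV_DONTNEED)` (Linux-specific, no root needed, though the eviction is only advisory), then time a cold read against a warm one. The helper names are mine:

```python
# Sketch: cold (evicted from page cache) vs. warm read timing on
# Linux. posix_fadvise/POSIX_FADV_DONTNEED are real os-module calls;
# the eviction is best-effort, so treat the cold number as approximate.
import os
import time

def read_timed(path):
    """Read the whole file in 1 MiB chunks; return elapsed seconds."""
    start = time.monotonic()
    with open(path, "rb") as f:
        while f.read(1 << 20):
            pass
    return time.monotonic() - start

def drop_from_cache(path):
    """Ask the kernel to evict this file's pages from the page cache."""
    fd = os.open(path, os.O_RDONLY)
    try:
        os.fsync(fd)  # flush dirty pages so DONTNEED can evict them
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
    finally:
        os.close(fd)

def bench(path):
    drop_from_cache(path)
    cold = read_timed(path)   # (mostly) from disk
    warm = read_timed(path)   # from page cache
    return cold, warm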
PLEASE! I found many of you just asking the same question I've answered before
You aren't answering them. You're saying, "these were MY findings"... If you can provide ANY evidence from ANY SOURCE OUTSIDE your own website which shows findings even VAGUELY SIMILAR to yours, your argument will be more credible.
Quote:
Originally Posted by tmcco
after 20 times of aging, the random-read performance degraded to:
...
XFS: 40%
Reiser3: 50%
You make these ridiculous claims, without considering the fact that even a novice computer user would notice a 60% performance hit.
Going by your numbers, my computer should have slowed to a crawl by now. It hasn't. Explain that.
Quote:
Originally Posted by rocket357
How about this: run the same test on a system that's aged through actual use...
I agree. Make it a file server which has lots of throughput.
Yes, this is another perfect example of someone who has missed everything in my posts.
1. You missed post #51, where I pointed to those references by other people.
2. You missed that I said the performance degraded to those levels only "after 20 rounds of aging"; it is you who were being ridiculous.
Let's assume that tmcco is speaking English as a second language and just let it go. He apparently cannot comprehend what we have said. IF he is actually saying something that we have missed (I do not think this is the case), then he is unable to express himself in a manner that we can comprehend.
I looked. There are no references there. Just YOUR words and YOUR charts. We have told you time and time again that you cannot be your own authority. This is a basic principle of philosophy. You must give us some credible examples of well-known admins who are complaining about filesystem fragmentation if you expect to have any hope of being taken seriously. At this point, you aren't even bothering to make an effort. I think you should google "internet crackpot".
hi,
As I and many others said earlier, the test suite is the first problem. If you want people to accept your concept, the test suite must first be able to measure how much fragmentation is present in our aged file systems, because we need to know our current status first. I think you will accept that.
Then the test suite must show the advantage of using the program.
It's not a proper test suite because it isn't ready to accept our own test cases.
Regarding benchmarks,
Quote:
Originally Posted by studioj
we need to see raw benchmarking data not charts.