Linux - General
This Linux forum is for general Linux questions and discussion. If it is Linux related and doesn't seem to fit in any other forum then this is the place.
Edit: oh wait, I just realized that CentOS does not provide XFS - which means you'll need to dump CentOS or compile your own kernel, but I don't know whether those are acceptable options.
When hard drives were much smaller than they are today, they were routinely close to being "completely filled up." When files were added or expanded, it was fairly unlikely that sufficient large blocks of contiguous free space existed, anywhere, period. So, "fragmentation" was a fact-of-life, as much because of the problem of "being almost completely filled up" as anything else.
These days, space is usually plentiful. But disk and disk-controller hardware is a lot smarter, too. There are often copious amounts of unused RAM that can be devoted to file buffers. So the practical impact of fragmentation is, for the most part, gone.
Does fragmentation continue to exist? As I said earlier, yes, but it's academic: nobody's "screaming." Some filesystems and storage systems rearrange files on the media for various reasons, but the payoff of a "disk defragmenter" program has vanished.
When I was in secondary school I was taught loads of stuff in science class, mainly about electrons and atoms and orbits. Then I get to university and am told "yeah, that was kinda nonsense, just a teaching aid and not at all true, but you couldn't handle the truth" by the lecturers there. At the risk of aligning myself to a university lecturer, I'd say this is a similar thing...
Are you saying there aren't really electrons and atoms and orbits? I think even in university, that is the official story.
Unless you get into quantum mechanics, which is highly specialized.
In the very old days, when I went to school on this stuff, this is the answer I was given: back up to tape each night and restore in the morning. Backing up with tar to tape writes every file out in order (some order, at least - not fragmented), so the restore lays each file back down contiguously.
Every day your system gets backed up, tested, and defragged all at once.
We still back up some systems to tape, but the sad fact is tapes can't keep up with the amount of data or the speed needed anymore, so you have to back up to some networked or attached RAID if you can.
Instead of a full backup you may be able to use other tools, depending on what is going on. VMs can compact virtual disks, and you can still tar over to an image and back. There might be other ways.
Plain cat may be able to fix some single files: copying a file out and back rewrites it into whatever contiguous free space is available.
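A minimal sketch of both ideas - the tar round-trip and the single-file cat copy. This runs in a scratch directory it creates itself; the paths and file names are placeholders, not a recipe for a live /home, and you would want a verified backup before trying anything like this on real data.

```shell
#!/bin/sh
set -e
work=$(mktemp -d)
mkdir "$work/data"
echo "some file contents" > "$work/data/bigfile"

# "Backup to tape each night" - a plain archive file stands in for the
# tape drive here. tar writes the files out sequentially.
tar -cf "$work/backup.tar" -C "$work" data

# Remove the (potentially fragmented) originals, then "restore in
# morning": each extracted file lands in contiguous free space.
rm -rf "$work/data"
tar -xf "$work/backup.tar" -C "$work"

# Single-file variant: copy out and back with cat, then rename over
# the original. The new copy is written fresh, extent by extent.
cat "$work/data/bigfile" > "$work/data/bigfile.new"
mv "$work/data/bigfile.new" "$work/data/bigfile"
```

Note that the rename step replaces the original inode, so hard links and any extended attributes on the original file would be lost - another reason tar (which preserves metadata) is the safer wholesale approach.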
I note that the next Fedora release will drop the whole ext filesystem tree in favor of btrfs (which is fully supported in the new 3.0 Linux kernels). My understanding (from reading, not experience) is that you really don't need to defragment btrfs filesystems because storage-medium usage (and access) is optimized by the filesystem's internal structure.
Since Fedora is Red Hat's "test bed" for new things, if the Fedora experience is positive, newer Red Hat (and CentOS) releases may move to btrfs in a year or two.
Quote:
Originally Posted by terfy
Hello guys,
I'm wondering about this. Everyone says that it is not necessary to defrag an ext3 filesystem, but look at this:
"420 extents found, perfection would be 37 extents"
It's a 4 GB file. Why is it not necessary to defrag this file, or any other file? I just don't get why there is no defrag tool for Linux.
First of all, defragmenting a filesystem in Linux while it is mounted could be a very bad thing.
Secondly, nine times out of ten, filesystems under Linux DON'T need to be defragmented. However, since you seem adamant about making this happen, may I suggest the following link:
Just remember that it is a good idea to have at least 10% of the filesystem you want to defragment free for the entire process to succeed. And don't forget: UNMOUNT THE AFFECTED FILESYSTEM BEFORE YOU RUN THE DEFRAGMENT!
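For what it's worth, the "420 extents found" line quoted above is the output format of filefrag (shipped with e2fsprogs), so you can measure before and after any rewrite to see whether it actually helped. A small sketch against a throwaway file; the exact extent count reported will depend on your filesystem and free-space layout, and filefrag may need root for the FIBMAP fallback on older kernels:

```shell
#!/bin/sh
# Create a scratch 4 MiB file, then ask the kernel how many extents
# back it. filefrag only reads mapping info, so the filesystem can
# stay mounted for this check.
demo=$(mktemp)
dd if=/dev/zero of="$demo" bs=1M count=4 2>/dev/null
filefrag "$demo" || true    # prints something like: "/tmp/tmp.XXXX: 1 extent found"
```

Fewer extents means less fragmentation; a freshly written file on a mostly empty filesystem will usually come back with a single-digit count.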