Linux - SoftwareThis forum is for Software issues.
hi,
There is no need to defrag, because these filesystems use extents. The filesystem allocates a standard-sized extent for each file, so when a new file is created it goes into a new extent, not into the leftover space of another file's extent. When a file is modified or appended, the new data goes into the same extent or its last extent. So there is no need to defrag.
regards,
Nirmal Tom.
You may be incorrect.
1. ext2/3 do not use extents for space allocation; ext4dev may add this in the future.
2. Even if you are using an extent-based filesystem (JFS/XFS/Reiser3/4), have you ever thought about the "create/delete/create..." situation? Something like:
. . . . . . . . . . (10 free blocks, each 4KB)
1 1 2 2 3 3 3 4 5 . (you've allocated 5 files by "extent", 1 block left)
1 1 . . 3 3 3 . 5 . (delete files 2 and 4)
1 1 6 6 3 3 3 6 5 6 (a new file "6" allocates 4 blocks, which are not contiguous: 3 fragments)
3. You may argue that in the example above, with more free space, the filesystem could allocate free and contiguous space, which is false. Please read here: http://defragfs.sourceforge.net/theory.html
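The create/delete/create example above can be replayed in a few lines. This is a toy sketch of a naive first-fit allocator, not any real filesystem's code; the file sizes are the ones from the diagram.

```python
# Replay the 10-block example with naive first-fit allocation
# (an illustration only, not a real filesystem allocator).

def first_fit(disk, file_id, nblocks):
    """Place nblocks of file_id into the first free slots, left to right."""
    placed = 0
    for i, b in enumerate(disk):
        if b is None and placed < nblocks:
            disk[i] = file_id
            placed += 1

def fragments(disk, file_id):
    """Count contiguous runs of file_id's blocks."""
    return sum(1 for i, b in enumerate(disk)
               if b == file_id and (i == 0 or disk[i - 1] != file_id))

disk = [None] * 10                       # . . . . . . . . . .
for f, n in [(1, 2), (2, 2), (3, 3), (4, 1), (5, 1)]:
    first_fit(disk, f, n)                # 1 1 2 2 3 3 3 4 5 .
for i, b in enumerate(disk):             # delete files 2 and 4
    if b in (2, 4):
        disk[i] = None                   # 1 1 . . 3 3 3 . 5 .
first_fit(disk, 6, 4)                    # 1 1 6 6 3 3 3 6 5 6
print(fragments(disk, 6))                # file 6 ends up in 3 fragments
```

Running it confirms the diagram: file 6's four blocks land in three separate runs even though enough total free space existed.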
hi,
Nice info. Yes, I was thinking of ReiserFS (and ext4 will include extents). Even LVM2 uses extents. Since extents reduce the amount of fragmentation, I, like many others, generally don't care about it. I will try the tool and see the performance difference; I think it contains some Perl scripts for that too.
tmcco,
It's a nice idea to have a user-space filesystem defrag tool, but I personally don't see a need for it.
1) With *x filesystems being nonfragmenting, there isn't a need to regularly defragment a filesystem.
Crashes that force a hard power-off and reboot, or power failures, are just about the only causes of filesystem fragmentation on *x systems.
2) It has always been recommended to completely rebuild your filesystem every 6 months, so that the 0.6% fragmentation that may occur in that time gets cleaned up by following the recommended procedure.
I know, people just coming to *x from MS's offerings will not know the recommended procedure; they will see a need for such a tool and want it. But they will also be rebuilding their systems at least monthly until they learn more about *x, so they don't need it either. When they stop rebuilding that often, they will also know it is best to rebuild the system every six months; problems arising from failure to do so will be their own fault.
[ Problems are not really likely; I have had systems running various flavours of *x for three years without a single update or issue. ]
1. You said "*x filesystems being nonfragmenting", which I think no one would believe; just look at the example I mentioned before.
2. How do you measure the "0.6% fragmentation"? By guessing? How can you be sure that every *x filesystem would only get 0.6% fragmentation in 6 months?
Although I do not know much about the filesystems themselves, I must admit I have had no need to defrag any of my filesystems, even after months of hard use (and abuse). Some of my machines haven't had a reformat for at least 2 or 3 years, and they still seem to perform the same as they did when they were first installed.
It's often stated that *nix filesystems do not fragment the way that Windows filesystems do. For some reason people take this to mean that *nix filesystems don't fragment at all. In truth they do, just to a lesser extent, because the filesystem does a much better job of preventing fragmentation.
The ext2 and ext3 filesystems most often used on Linux systems also attempt to keep fragmentation to a minimum. These filesystems keep all the blocks in a file close together by preallocating disk data blocks to regular files before they are actually used. Because of this, when a file increases in size, several adjacent blocks are already reserved, reducing fragmentation. It is therefore seldom necessary to analyze the amount of fragmentation on a Linux system, never mind actually run a defragment command. An exception exists for files that are constantly appended to, as the reserved blocks will only last so long.
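The effect of preallocation described above can be shown with a toy model. This is an illustration only, not ext2/ext3's actual allocator; the 32-block disk, the 8-block reservation size, and the two interleaved files are all made-up parameters. Two files growing in alternation is the classic case where a naive allocator interleaves their blocks, while reserving adjacent blocks up front keeps each file contiguous.

```python
# Toy model of block preallocation (an illustration, not ext2/ext3's real
# allocator). When a file gets its first block of a new region, a few
# adjacent blocks are reserved too, so later appends stay contiguous.

class Disk:
    def __init__(self, nblocks, prealloc):
        self.blocks = [None] * nblocks   # None = free
        self.prealloc = prealloc         # blocks reserved per allocation
        self.reserved = {}               # file_id -> its reserved block indexes
        self.all_reserved = set()        # blocks reserved for any file

    def append(self, file_id):
        """Give file_id one more data block, preferring its reservation."""
        if self.reserved.get(file_id):
            i = self.reserved[file_id].pop(0)
        else:
            free = [i for i, b in enumerate(self.blocks)
                    if b is None and i not in self.all_reserved]
            run = free[:self.prealloc]   # contiguous on a fresh disk
            i, self.reserved[file_id] = run[0], run[1:]
            self.all_reserved.update(run[1:])
        self.all_reserved.discard(i)
        self.blocks[i] = file_id

def fragments(blocks, file_id):
    """Count contiguous runs of file_id's blocks."""
    return sum(1 for i, b in enumerate(blocks)
               if b == file_id and (i == 0 or blocks[i - 1] != file_id))

for prealloc in (1, 8):
    d = Disk(32, prealloc)
    for _ in range(8):                   # two files growing in alternation
        d.append("a")
        d.append("b")
    print(f"prealloc={prealloc}: file 'a' in "
          f"{fragments(d.blocks, 'a')} fragment(s)")
```

With no preallocation (prealloc=1) the two files end up interleaved, 8 fragments each; with an 8-block reservation each file stays in a single contiguous run.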
"they still SEEM to perform the same as they did when they were first installed"
You may need some data to make your argument more persuasive. For example, the output of defragfs.
tmcco,
Try running e2fsck* on your ext2 filesystem, or reboot 20 to 30 times and your distro will run it automatically; the output messages will tell you how fragmented that particular filesystem is.
*e2fsck is for ext2; ext3 is checked by the same tool (fsck.ext3 is normally a link to e2fsck).
Likewise, every other filesystem available for use on any *x system can be checked and have its amount of fragmentation reported.
The 0.6% fragmentation is the amount my heavily used home partition has shown regularly throughout the time I have been using Linux, even when I did not rebuild the filesystem for 3 years. My minimally altered / partition generally shows 0.01% fragmentation after a 6-month period, which is exactly the same fragmentation level it showed right after the filesystem was created.
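The percentage that fsck reports is, roughly, the share of files whose data blocks are split into more than one contiguous run. A small sketch of that arithmetic, using a made-up file list (the filenames and extent lists here are purely illustrative, not output from any real tool):

```python
# Sketch of the arithmetic behind a "X% non-contiguous" figure:
# the fraction of files whose blocks span more than one extent.
# The file list below is invented for illustration.

files = {
    "a.log":  [[10, 11, 12]],            # one extent  -> contiguous
    "b.bin":  [[20, 21], [40, 41, 42]],  # two extents -> fragmented
    "c.txt":  [[55]],                    # contiguous
    "d.conf": [[60, 61]],                # contiguous
}

fragmented = sum(1 for extents in files.values() if len(extents) > 1)
pct = 100.0 * fragmented / len(files)
print(f"{pct:.1f}% non-contiguous")      # 1 of 4 files -> 25.0%
```

A figure like 0.6% therefore means that only about 6 files in 1000 have their blocks split across non-adjacent regions.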
Good.
Could you please run "defragfs" on your fs and paste the result?