Linux - Software: This forum is for Software issues.
I don't understand: if sequential reads and writes are so much faster than random ones on an SSD, why is defragmenting pointless? (I know SSDs have a limited number of writes.)
If physical sectors don't match logical sectors, then what does 'sequential' in speed tests actually mean?
In short: defrag collects the scattered parts of the same file into the same or a nearby location (track), if possible. That lowers head movement. An SSD has no moving head, so access speed is not influenced by where the files are stored.
The short answer, I guess, is that sequential vs. random throughput depends on the SSD's internal parallelism (number of chips and channels), its internal block and page sizes, and other details. I haven't got it all figured out, but if you read those articles, I think you'll know more than most of us, and you might find out whether defragging helps and why.
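As a rough illustration (not a proper benchmark; a tool like fio measures this properly, and the page cache will skew repeated runs), you can compare a sequential pass against small scattered reads with plain dd. The file path, size, and offsets here are made up for the sketch:

```shell
# Create a 64 MiB test file (path and size are arbitrary for this sketch)
dd if=/dev/zero of=/tmp/ssdtest.bin bs=1M count=64 conv=fsync 2>/dev/null

# Sequential read: one pass with large blocks; dd reports throughput on stderr
dd if=/tmp/ssdtest.bin of=/dev/null bs=1M 2>&1 | tail -n 1

# "Random" read: single 4 KiB blocks at scattered offsets (all within the file)
for off in 9000 1234 16000 777; do
    dd if=/tmp/ssdtest.bin of=/dev/null bs=4k count=1 skip=$off 2>/dev/null
done
```

For real numbers you would bypass the page cache (fio with direct I/O, for example); with dd alone, a second run mostly measures RAM.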
I've found that with newer kernels, large disks, and modern filesystems, defrag is a thing of the past. Try using 'filefrag' to check for fragmented files. All files should have 1 extent; that means fully defragmented.
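For example (assuming e2fsprogs' filefrag is installed; the file name is just an illustration):

```shell
# Write a small file, then ask the kernel how many extents it occupies;
# a file reported with 1 extent is fully contiguous (not fragmented).
printf 'some data\n' > ./fragcheck.txt
filefrag ./fragcheck.txt

# Sweep a directory tree and show only files with more than one extent
find . -type f -exec filefrag {} + 2>/dev/null | grep -v ': 1 extent'
```

Note that filefrag needs a filesystem that supports the FIEMAP (or FIBMAP) ioctl; on something like tmpfs it won't report extents.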
Linux filesystems don't fragment files like Windows does (or did, I haven't kept up in the past few years with Windows). Linux doesn't even have a defragmentation tool, and doesn't need one.
Actually, XFS has a sparse defrag tool (xfs_fsr). Note that the original notion of defrag was not sparse. As jefro suggests, for FAT and NTFS defrag means moving data closer together. That's not what is done, or should be done, with modern filesystems. What you want is sparse defrag: keep files apart, but keep each file contiguous. Again, it's not needed with newer kernels and large drives.
It is not really true that Linux doesn't need to be defragmented. It does, or can, get fragmented, and in server rooms one may need to check for that several times a year. It depends on a number of factors, but I'll agree that most home users would never notice an issue.
This is with newer kernels? 3.10.x and up? I've noticed a big change in fragmentation levels between newer and older kernels, so I have to ask. When I was using older kernels I too had to defrag, and there are a number of scripts available to do so, but I don't use them anymore, because I don't have to.
@Beryllos
Thanks for the link. I think I (somewhat) understand it now.
To achieve sequential read speeds, one probably has to write sequentially, so that the SSD knows how to place the bits efficiently in parallel.
Defrag programs probably write block-by-block instead of file-by-file, so they wouldn't speed up the disk.