Linux - Software: This forum is for Software issues.
Having a problem installing a new program? Want to know which application is best for the job? Post your question in this forum.
11-06-2014, 02:17 AM | #1
Senior Member
Registered: Jul 2006
Location: Belgrade, Yugoslavia
Distribution: Debian stable/testing, amd64
Posts: 1,068
SSD defrag
I don't understand: if sequential reads and writes are so much faster than random ones on an SSD, why is defragmenting pointless? (I know SSDs have a limited number of writes.)
If physical sectors don't match logical ones, then what does 'sequential' in speed tests mean?
11-06-2014, 04:24 AM | #2
LQ Addict
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 24,192
In short: defrag collects the different parts of the same file into the same or a nearby location (track), if possible. That lowers head movement. An SSD has no moving head, so access speed to a file is not influenced by its storage location.
1 member found this post helpful.
11-06-2014, 05:46 AM | #3
Senior Member
Registered: Jul 2006
Location: Belgrade, Yugoslavia
Distribution: Debian stable/testing, amd64
Posts: 1,068
Original Poster
Why, then, are sequential speeds higher in benchmarks?
11-06-2014, 06:01 AM | #4
LQ Addict
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 24,192
Can you show an example?
11-06-2014, 06:28 AM | #5
Senior Member
Registered: Jul 2006
Location: Belgrade, Yugoslavia
Distribution: Debian stable/testing, amd64
Posts: 1,068
Original Poster
Take any benchmark, e.g. http://www.anandtech.com/bench/product/966
128KB Sequential Read (4K Aligned): 429.8 MB/s
4KB Random Read (4K Aligned): 91.7 MB/s
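For a rough feel of what those two numbers measure, here is a minimal Python sketch that times a front-to-back read in 128 KiB chunks against 4 KiB reads at random aligned offsets, on an ordinary file. The file name and sizes are arbitrary, and note that on a warm file this mostly exercises the OS page cache rather than the raw device (real benchmarks use direct I/O), so it only illustrates the access patterns, not device throughput:

```python
import os, random, time

PATH = "lq_testfile.bin"   # temporary file, deleted at the end
SIZE = 16 * 1024 * 1024    # 16 MiB test file
BLOCK = 4096               # 4 KiB, like the "4KB Random Read" test

with open(PATH, "wb") as f:
    f.write(os.urandom(SIZE))

def sequential_read():
    # Read the whole file front to back in 128 KiB chunks.
    with open(PATH, "rb") as f:
        while f.read(128 * 1024):
            pass

def random_read(n=1000):
    # Read n 4 KiB blocks at random 4 KiB-aligned offsets.
    with open(PATH, "rb") as f:
        for _ in range(n):
            f.seek(random.randrange(SIZE // BLOCK) * BLOCK)
            f.read(BLOCK)

t0 = time.perf_counter(); sequential_read(); t_seq = time.perf_counter() - t0
t0 = time.perf_counter(); random_read();     t_rand = time.perf_counter() - t0
print(f"sequential: {t_seq:.4f}s  random 4K: {t_rand:.4f}s")
os.remove(PATH)
```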
11-06-2014, 06:40 AM | #6
LQ Guru
Registered: Apr 2010
Location: Continental USA
Distribution: Debian, Ubuntu, RedHat, DSL, Puppy, CentOS, Knoppix, Mint-DE, Sparky, VSIDO, tinycore, Q4OS, Manjaro
Posts: 6,163
Sequential
I think you misunderstand the testing methodology and terms. Those result factors have little to do with fragmentation (as in, some, but not much).
11-06-2014, 09:18 AM | #7
Senior Member
Registered: Jul 2006
Location: Belgrade, Yugoslavia
Distribution: Debian stable/testing, amd64
Posts: 1,068
Original Poster
Well, what am I missing? If we have a defragmented file, it is read sequentially; a badly fragmented one is read 'randomly'. Is that not so?
11-06-2014, 11:03 AM | #8
Member
Registered: Apr 2013
Location: Massachusetts
Distribution: Debian
Posts: 529
I googled "ssd random vs sequential read" and found an interesting series of articles on SSDs on the codeCapsule.com blog: Coding for SSDs. There is a lot there, but scroll down to the table of contents, click on Part 5: Access Patterns and System Optimizations, and look it over.
The short answer, I guess, is that sequential vs. random throughput depends on the internal parallelism (number of chips and channels), the internal block and page sizes, and other details of the SSD. I haven't got it all figured out, but if you read those articles, I think you'll know more than most of us, and you might find out whether defragging helps and why.
Last edited by Beryllos; 11-06-2014 at 11:06 AM.
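The internal-parallelism point can be illustrated with a toy model (the 8-channel and 4 KiB-page figures are assumptions for illustration, not taken from the articles): if consecutive pages are striped round-robin across channels, a 128 KB sequential read keeps every channel busy at once, while a single 4 KB random read lands on just one channel.

```python
# Toy model of SSD internal parallelism (channel count and page size are assumed).
CHANNELS = 8
PAGE = 4096  # bytes per page

def channel_of(page_index):
    # Consecutive pages are striped round-robin across the channels.
    return page_index % CHANNELS

# A 128 KB sequential read covers 32 consecutive pages,
# so it touches all 8 channels, which can work in parallel.
seq_pages = range(128 * 1024 // PAGE)
print(len({channel_of(p) for p in seq_pages}))  # number of channels engaged

# A single 4 KB random read touches exactly one channel.
print(len({channel_of(12345)}))
```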
1 member found this post helpful.
11-06-2014, 12:03 PM | #9
Senior Member
Registered: Dec 2013
Distribution: Slackware
Posts: 1,982
I've found that with newer kernels, large disks, and modern filesystems, defrag is a thing of the past. Try using 'filefrag' to check for fragmented files. All files should have 1 extent, which means fully defragmented.
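As a sketch of how you might act on filefrag's report in bulk, here is a small Python parser for its one-line-per-file "N extents found" output; the file names in the sample are made up:

```python
import re

# Made-up sample of `filefrag` output (one "<file>: N extents found" line per file).
sample = """\
vmlinuz: 1 extent found
initrd.img: 3 extents found
video.mkv: 27 extents found
"""

def fragmented(report):
    """Return the files reported with more than one extent."""
    hits = []
    for line in report.splitlines():
        m = re.match(r"(.+): (\d+) extents? found$", line)
        if m and int(m.group(2)) > 1:
            hits.append(m.group(1))
    return hits

print(fragmented(sample))  # → ['initrd.img', 'video.mkv']
```

In practice you would feed it the captured output of `filefrag *` instead of the sample string.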
1 member found this post helpful.
11-06-2014, 03:55 PM | #10
Moderator
Registered: Mar 2008
Posts: 22,361
A mechanical hard drive places data in different areas; the reason you defrag is to get related data as close together as possible.
An SSD has no such need: it takes the same time to access a block in one part of memory as in another.
SSDs also do housekeeping internally, so there is no real need unless you are trying to salvage a few bits here and there.
11-06-2014, 04:14 PM | #11
Senior Member
Registered: Jan 2008
Location: Baja Oklahoma
Distribution: Debian Stable and Unstable
Posts: 1,964
Linux filesystems don't fragment files the way Windows does (or did; I haven't kept up with Windows in the past few years). Linux doesn't even have a defragmentation tool, and doesn't need one.
11-06-2014, 05:16 PM | #12
Senior Member
Registered: Dec 2013
Distribution: Slackware
Posts: 1,982
Quote:
Originally Posted by sgosnell
Linux filesystems don't fragment files the way Windows does (or did; I haven't kept up with Windows in the past few years). Linux doesn't even have a defragmentation tool, and doesn't need one.
Actually, XFS has a sparse defrag tool. Note that originally defrag was not sparse. As jefro suggests, for FAT and NTFS, defrag means moving data closer together. That's not what is done, or should be done, with modern filesystems: what you want is sparse defrag. Keep files apart, but keep each file contiguous. Again, it's not needed with newer kernels and large drives.
1 member found this post helpful.
11-06-2014, 07:46 PM | #13
Moderator
Registered: Mar 2008
Posts: 22,361
It is not really true that Linux doesn't need to be defragmented. It does, or can, get fragmented, and in server rooms one may need to check for that many times a year. It depends on a number of factors, but I'll agree that most home users would never notice an issue.
2 members found this post helpful.
11-06-2014, 07:59 PM | #14
Senior Member
Registered: Dec 2013
Distribution: Slackware
Posts: 1,982
Quote:
Originally Posted by jefro
It is not really true that Linux doesn't need to be defragmented. It does, or can, get fragmented, and in server rooms one may need to check for that many times a year. It depends on a number of factors, but I'll agree that most home users would never notice an issue.
Is this with newer kernels, 3.10.x and up? I've noticed a big change in fragmentation levels with newer kernels vs. older ones, so I have to ask. I know that when I was using older kernels I too had to defrag, and there are a number of scripts available to do so, but I don't use them anymore, because I don't have to.
1 member found this post helpful.
11-07-2014, 07:30 AM | #15
Senior Member
Registered: Jul 2006
Location: Belgrade, Yugoslavia
Distribution: Debian stable/testing, amd64
Posts: 1,068
Original Poster
@Beryllos
Thanks for the link. I think I (somewhat) understand it now.
To achieve sequential read speeds, one probably has to write sequentially, so that the SSD knows how to place bits efficiently in parallel.
Defrag programs probably write block-by-block instead of file-by-file, so they wouldn't speed up the disk.
Last edited by qrange; 11-07-2014 at 07:31 AM.