LinuxQuestions.org
Forums > Linux Forums > Linux - Software
Old 11-06-2014, 02:17 AM   #1
qrange
Senior Member
 
Registered: Jul 2006
Location: Belgrade, Yugoslavia
Distribution: Debian stable/testing, amd64
Posts: 1,061

Rep: Reputation: 47
SSD defrag


I don't understand: if sequential reads and writes are so much faster than random ones on an SSD, why is defragmenting pointless? (I know SSDs have a limited number of writes.)
And if physical sectors don't match logical ones, what does 'sequential' in speed tests even mean?
 
Old 11-06-2014, 04:24 AM   #2
pan64
LQ Addict
 
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 21,836

Rep: Reputation: 7308
In short: defrag collects the scattered parts of a file into the same or a nearby location (track), if possible. That reduces head movement. An SSD has no moving head, so access speed is not influenced by where a file is stored.
 
1 members found this post helpful.
Old 11-06-2014, 05:46 AM   #3
qrange
Senior Member
 
Registered: Jul 2006
Location: Belgrade, Yugoslavia
Distribution: Debian stable/testing, amd64
Posts: 1,061

Original Poster
Rep: Reputation: 47
Why, then, are sequential speeds higher in benchmarks?
 
Old 11-06-2014, 06:01 AM   #4
pan64
LQ Addict
 
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 21,836

Rep: Reputation: 7308
can you show an example?
 
Old 11-06-2014, 06:28 AM   #5
qrange
Senior Member
 
Registered: Jul 2006
Location: Belgrade, Yugoslavia
Distribution: Debian stable/testing, amd64
Posts: 1,061

Original Poster
Rep: Reputation: 47
take any benchmark, for eg: http://www.anandtech.com/bench/product/966

128KB Sequential Read (4K Aligned): 429.8 MB/s
4KB Random Read (4K Aligned): 91.7 MB/s
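For context, the access-pattern difference those numbers capture can be sketched roughly with dd (the file size, block sizes, and offsets here are arbitrary illustration values, not a real benchmark):

```shell
# Rough sketch of what "sequential" vs. "random" read tests do.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=64 status=none

# Sequential: one pass over the file in large (128K) blocks
dd if="$f" of=/dev/null bs=128k status=none

# "Random": many small (4K) reads at scattered offsets
for off in 13000 1 9000 4500 16000; do
    dd if="$f" of=/dev/null bs=4k skip="$off" count=1 status=none
done

rm -f "$f"
```

Real benchmarks use a dedicated tool such as fio, with O_DIRECT and controlled queue depths; dd through the page cache only illustrates the access pattern, not raw device speed.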
 
Old 11-06-2014, 06:40 AM   #6
wpeckham
LQ Guru
 
Registered: Apr 2010
Location: Continental USA
Distribution: Debian, Ubuntu, RedHat, DSL, Puppy, CentOS, Knoppix, Mint-DE, Sparky, VSIDO, tinycore, Q4OS,Manjaro
Posts: 5,617

Rep: Reputation: 2695
Sequential

I think you misunderstand the testing methodology and terms. Those results have little to do with fragmentation (some, but not much).
 
Old 11-06-2014, 09:18 AM   #7
qrange
Senior Member
 
Registered: Jul 2006
Location: Belgrade, Yugoslavia
Distribution: Debian stable/testing, amd64
Posts: 1,061

Original Poster
Rep: Reputation: 47
Well, what am I missing? A defragmented file would be read sequentially; a badly fragmented one would be read 'randomly', would it not?
 
Old 11-06-2014, 11:03 AM   #8
Beryllos
Member
 
Registered: Apr 2013
Location: Massachusetts
Distribution: Debian
Posts: 529

Rep: Reputation: 319
I googled "ssd random vs sequential read" and found this interesting series of articles on SSD:
from codeCapsule.com (blog): Coding for SSDs
There is a lot there, but scroll down to the Table of Contents and open Part 5: Access Patterns and System Optimizations.

The short answer, I guess, is that sequential vs. random throughput depends on the SSD's internal parallelism (number of chips and channels), its internal block and page sizes, and other details. I haven't got it all figured out, but if you read those articles I think you'll know more than most of us, and you may find out whether defragging helps and why.

Last edited by Beryllos; 11-06-2014 at 11:06 AM.
 
1 members found this post helpful.
Old 11-06-2014, 12:03 PM   #9
metaschima
Senior Member
 
Registered: Dec 2013
Distribution: Slackware
Posts: 1,982

Rep: Reputation: 492
I've found that with newer kernels, large disks, and modern filesystems, defrag is a thing of the past. Try using 'filefrag' to check for fragmented files: a file with 1 extent is fully contiguous, i.e. unfragmented.
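A quick way to try that (a sketch; filefrag ships with e2fsprogs, the demo file name is arbitrary, and the extent count depends on the filesystem the file lands on):

```shell
# Write a small file, then ask the filesystem how many extents it uses.
# "1 extent found" means the file is fully contiguous.
f=./filefrag-demo.bin
dd if=/dev/zero of="$f" bs=1M count=8 status=none
filefrag "$f"
rm -f "$f"
```

Note that filefrag relies on the FIEMAP ioctl, which not every filesystem supports, so it can report an error on some mounts (e.g. tmpfs).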
 
1 members found this post helpful.
Old 11-06-2014, 03:55 PM   #10
jefro
Moderator
 
Registered: Mar 2008
Posts: 21,978

Rep: Reputation: 3624
A mechanical hard drive places data in different areas; the reason you defrag is to get related data as close together as possible.

An SSD has no such need to keep data close. It takes the same time to access a block in one part of the memory as in any other.

They also do housekeeping internally, so there is no real need unless you are trying to salvage a few bits here and there.
 
Old 11-06-2014, 04:14 PM   #11
sgosnell
Senior Member
 
Registered: Jan 2008
Location: Baja Oklahoma
Distribution: Debian Stable and Unstable
Posts: 1,943

Rep: Reputation: 542
Linux filesystems don't fragment files the way Windows does (or did; I haven't kept up with Windows in the past few years). Linux doesn't even have a defragmentation tool, and doesn't need one.
 
Old 11-06-2014, 05:16 PM   #12
metaschima
Senior Member
 
Registered: Dec 2013
Distribution: Slackware
Posts: 1,982

Rep: Reputation: 492
Quote:
Originally Posted by sgosnell View Post
Linux filesystems don't fragment files the way Windows does (or did; I haven't kept up with Windows in the past few years). Linux doesn't even have a defragmentation tool, and doesn't need one.
Actually, XFS has a sparse defrag tool. Note that defrag originally was not sparse: as jefro suggests, for FAT and NTFS defrag means moving data closer together. That's not what is (or should be) done with modern filesystems. What you want is sparse defrag: keep files apart, but keep each file contiguous. Again, it's not needed with newer kernels and large drives.
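As an illustration of the per-file idea (a hedged sketch, not the XFS tool itself): rewriting a file in one pass lets the allocator lay it out contiguously again, assuming there is enough free space for the temporary copy. The demo file name is arbitrary.

```shell
# Crude per-file defrag-by-rewrite: copy the file in one pass, then
# atomically replace the original. Contents are unchanged; the fresh
# copy is usually allocated contiguously by the filesystem.
f=./rewrite-demo.bin
dd if=/dev/urandom of="$f" bs=64k count=16 status=none
before=$(md5sum < "$f")

cp -p "$f" "$f.new" && mv "$f.new" "$f"

after=$(md5sum < "$f")
[ "$before" = "$after" ] && echo "contents unchanged"
rm -f "$f"
```

On XFS one would use the dedicated defragmenter instead, which does this in place without needing a rename.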
 
1 members found this post helpful.
Old 11-06-2014, 07:46 PM   #13
jefro
Moderator
 
Registered: Mar 2008
Posts: 21,978

Rep: Reputation: 3624
It is not really true that Linux doesn't need to be defragmented. It does, or can, get fragmented, and in server rooms one may need to check for that many times a year. It depends on a number of factors, but I'll agree that most home users would never notice an issue.
 
2 members found this post helpful.
Old 11-06-2014, 07:59 PM   #14
metaschima
Senior Member
 
Registered: Dec 2013
Distribution: Slackware
Posts: 1,982

Rep: Reputation: 492
Quote:
Originally Posted by jefro View Post
It is not really true that Linux doesn't need to be defragmented. It does, or can, get fragmented, and in server rooms one may need to check for that many times a year. It depends on a number of factors, but I'll agree that most home users would never notice an issue.
This is with newer kernels? 3.10.x and up? I've noticed a big change in fragmentation levels between newer and older kernels, so I have to ask. I know that when I was using older kernels I too had to defrag, and there are a number of scripts available to do so, but I don't use them anymore, because I don't have to.
 
1 members found this post helpful.
Old 11-07-2014, 07:30 AM   #15
qrange
Senior Member
 
Registered: Jul 2006
Location: Belgrade, Yugoslavia
Distribution: Debian stable/testing, amd64
Posts: 1,061

Original Poster
Rep: Reputation: 47
@Beryllos
Thanks for the link. I think I (somewhat) understand it now.
To achieve sequential read speeds, one probably has to write sequentially, so that the SSD knows how to place bits efficiently in parallel.
Defrag programs probably write block-by-block instead of file-by-file, so they wouldn't speed up the disk.

Last edited by qrange; 11-07-2014 at 07:31 AM.
 