LinuxQuestions.org
Ubuntu: This forum is for the discussion of Ubuntu Linux.

Old 06-15-2012, 11:36 AM   #1
jmore9
LQ Newbie
 
Registered: Jul 2005
Posts: 23

Rep: Reputation: 1
Solid state drives


Hello!

I have one solid state drive and was considering getting two more, but I was wondering if anyone has any idea how long they will last compared to regular hard drives.

Will they last as long as a disk drive on a normal system?

I have read that 2 TB drives have a read/write limit and after that the memory fails.

Thanks for your time in advance.
 
Old 06-15-2012, 12:48 PM   #2
Skaperen
Senior Member
 
Registered: May 2009
Location: center of singularity
Distribution: Xubuntu, Ubuntu, Slackware, Amazon Linux, OpenBSD, LFS (on Sparc_32 and i386)
Posts: 2,684
Blog Entries: 31

Rep: Reputation: 176
How long an SSD lasts depends on how heavily it is used for writing. Wear-leveling logic helps, but how well it works depends on how much of the drive gets written regularly. When data is written and never overwritten, it has to stay, so there can be LOTS of flash-erase-sized sections that cannot be reused, narrowing the wear down to fewer blocks. I regularly (every few months) do three full whole-drive wipe passes on my USB/CF/SD flash drives just to release any "can't erase this" sections and recirculate the pool. I don't know how much this helps even out the wear, but it seems like it could help more than the extra flash erasing would hurt.

It also depends on whether the flash drive uses MLC or SLC technology. MLC will, in theory, wear out faster than SLC, but MLC gives higher density and is more common.

Flash fabrication quality is reportedly around the level where 100,000 (or more) erase cycles is considered the median or average or typical failure point. But individual devices, or sections of devices, can vary widely from that figure. So how long an SSD lasts can also vary widely.
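If you want to keep an eye on this in practice, SMART wear attributes are one rough indicator; here is a minimal sketch with smartmontools (the device name is a placeholder, and the exact attribute names vary by vendor):

Code:
# Dump the SMART attribute table for an SSD (device name is a placeholder)
smartctl -A /dev/sdX

# Depending on the vendor, look for wear-related attributes such as
#   Media_Wearout_Indicator, Wear_Leveling_Count or Total_LBAs_Written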
 
Old 06-15-2012, 01:59 PM   #3
TobiSGD
Moderator
 
Registered: Dec 2009
Location: Germany
Distribution: Whatever fits the task best
Posts: 17,148
Blog Entries: 2

Rep: Reputation: 4886
Wiping the disk by writing zeroes to it, or something like that, actually makes things worse. Nowadays SSDs are pretty good at compensating for wear-out (and have less wear-out in general), so if you use the usual methods (the discard mount option on ext4, or the fstrim command from time to time) you should be pretty fine.
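For example, a minimal sketch of both of the usual methods (the UUID and mount point are placeholders; make sure the kernel, filesystem and SSD all support TRIM first):

Code:
# Option 1: let ext4 issue TRIM automatically via the discard mount option
# (example /etc/fstab line; the UUID and mount point are placeholders)
UUID=xxxx-xxxx  /  ext4  defaults,discard  0  1

# Option 2: trim free space manually from time to time
fstrim -v /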
By the way, there is no way to "even out" the wear.
 
Old 06-15-2012, 02:22 PM   #4
Skaperen
Senior Member
 
Registered: May 2009
Location: center of singularity
Distribution: Xubuntu, Ubuntu, Slackware, Amazon Linux, OpenBSD, LFS (on Sparc_32 and i386)
Posts: 2,684
Blog Entries: 31

Rep: Reputation: 176
Once the wear is there, of course you can't undo it. The issue I'm concerned with is the once-written sections that are stranded with data even though the filesystem no longer needs it. Flash-specific filesystems working at the flash chip level can erase what is unallocated. They may even be able to move smaller allocation units elsewhere to allow a larger flash block to be erased.

For ordinary filesystems sitting on top of what is presented as an ordinary drive with flash underneath, we depend on the filesystem having discard implemented, and ideally also on it moving blocks around to free whole erase blocks for discard (I've seen none that do that yet) to maximize the effect. And I've yet to see an SSD device that actually implements discard (I've worked with only a handful of SATA devices, mostly USB, CF, and SD devices, so maybe some SATA devices will do it). Until we can successfully issue discards at the allocation unit size (4K for most filesystems, even though erase blocks are usually larger), the benefit is limited. We also have to remember that once a block is written with any 0 bit, it is not generally usable for writing anything else until erased (although plausibly an algorithm could test for re-usable unerased blocks).

I do my wipes now with 0xFF, not 0x00. That at least gives the device the opportunity not to lock anything out as having current data. This would be semi-equivalent to discarding if the device knows not to write data blocks that are all 0xFF. Given that writing 0xFF takes less time than writing 0x00, it would seem the device is erasing rather than writing.

Some of the issues with flash discard ops may be due to poor design of tools. For example, hdparm allows no more than 65535 sectors per range. That's dumb.

Code:
orcus/root/x1 /root 522# time hdparm --please-destroy-my-drive --trim-sector-ranges 0:32768 /dev/sdg

/dev/sdg:
trimming 32768 sectors from 1 ranges
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
succeeded
[[ 0m0s real 0.161 - user 0.000 - sys 0.126 - 78.07% ]]
orcus/root/x1 /root 523# dd if=/dev/sdg iflag=direct bs=4096 count=1 2>/dev/null | xd16
00000000  fa31c08e d88ed0bc 007c89e6 06578ec0 |.1.......|...W..|
00000010  fbfcbf00 06b90001 f3a5ea1f 06000052 |...............R|
00000020  52b441bb aa5531c9 30f6f9cd 13721381 |R.A..U1.0....r..|
00000030  fb55aa75 0dd1e973 0966c706 8d06b442 |.U.u...s.f.....B|
00000040  eb155ab4 08cd1383 e13f510f b6c640f7 |..Z......?Q...@.|
00000050  e1525066 31c06699 e86600e8 21014d69 |.RPf1.f..f..!.Mi|
00000060  7373696e 67206f70 65726174 696e6720 |ssing operating |
00000070  73797374 656d2e0d 0a666066 31d2bb00 |system...f`f1...|
00000080  7c665266 5006536a 016a1089 e666f736 ||fRfP.Sj.j...f.6|
00000090  f47bc0e4 0688e188 c592f636 f87b88c6 |.{.........6.{..|
000000a0  08e141b8 01028a16 fa7bcd13 8d641066 |..A......{...d.f|
000000b0  61c3e8c4 ffbebe7d bfbe07b9 2000f3a5 |a......}.... ...|
000000c0  c3666089 e5bbbe07 b9040031 c05351f6 |.f`........1.SQ.|
000000d0  07807403 4089de83 c310e2f3 48745b79 |..t.@.......Ht[y|
000000e0  39595b8a 47043c0f 7406247f 3c057522 |9Y[.G.<.t.$.<.u"|
000000f0  668b4708 668b5614 6601d066 21d27503 |f.G.f.V.f..f!.u.|
00000100  6689c2e8 acff7203 e8b6ff66 8b461ce8 |f.....r....f.F..|
00000110  a0ff83c3 10e2cc66 61c3e862 004d756c |.......fa..b.Mul|
00000120  7469706c 65206163 74697665 20706172 |tiple active par|
00000130  74697469 6f6e732e 0d0a668b 44086603 |titions...f.D.f.|
00000140  461c6689 4408e830 ff721381 3efe7d55 |F.f.D..0.r..>.}U|
00000150  aa0f8506 ffbcfa7b 5a5f07fa ffe4e81e |.......{Z_......|
00000160  004f7065 72617469 6e672073 79737465 |.Operating syste|
00000170  6d206c6f 61642065 72726f72 2e0d0a5e |m load error...^|
00000180  acb40e8a 3e6204b3 07cd103c 0a75f1cd |....>b.....<.u..|
00000190  18f4ebfd 00000000 00000000 00000000 |................|
000001a0  00000000 00000000 00000000 00000000 |................|
000001b0  00000000 00000000 ff5c0500 00008000 |.........\......|
000001c0  01208303 e0bf0010 000000d0 01000000 |. ..............|
000001d0  c1c08303 e0ff00e0 01000020 74000000 |........... t...|
000001e0  00000000 00000000 00000000 00000000 |................|
000001f0  00000000 00000000 00000000 000055aa |..............U.|
00000200  00000000 00000000 00000000 00000000 |................|
*
orcus/root/x1 /root 524#
BTW, what I mean by even out is to release stranded sections so that they can rejoin the pool and the more worn sections may end up stranded for a while where they can relax on vacation.
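As for the hdparm 65535-sector-per-range limit mentioned above, here is a rough sketch of a workaround loop that trims a larger region in chunks (the device name and sector range are placeholders, and this destroys data in the trimmed range):

Code:
# Trim a large region in chunks of at most 65535 sectors, working around
# hdparm's per-range limit. DEV, START and END are placeholders.
DEV=/dev/sdX        # placeholder device, not a real drive
START=0             # first sector to trim
END=1000000         # one past the last sector to trim
CHUNK=65535         # hdparm's maximum sector count per range

sec=$START
while [ "$sec" -lt "$END" ]; do
    count=$(( END - sec ))
    [ "$count" -gt "$CHUNK" ] && count=$CHUNK
    hdparm --please-destroy-my-drive --trim-sector-ranges "${sec}:${count}" "$DEV"
    sec=$(( sec + count ))
done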

Last edited by Skaperen; 06-15-2012 at 02:25 PM. Reason: BTW
 
Old 06-15-2012, 04:14 PM   #5
TobiSGD
Moderator
 
Registered: Dec 2009
Location: Germany
Distribution: Whatever fits the task best
Posts: 17,148
Blog Entries: 2

Rep: Reputation: 4886
Ext4 and btrfs currently support the TRIM/discard feature (I use ext4 on my SSDs, since I think that btrfs is still not "production ready"). Keep in mind that the OP is asking about SSDs, not other flash devices. Almost all modern SSDs (meaning I don't know of one that doesn't; can we speak of a second generation here?) also have an implementation of Secure Erase, which marks all blocks on the SSD as free. This is faster (about 2 seconds on my 120GB Corsair Force 3) and doesn't cause any wear.
When idle, and when they know the filesystem (sadly I haven't found any data on which filesystems are supported), modern SSDs also automatically move data around on the disk to keep large blocks free for fast writing.
When I think about it, since modern drives use automatic compression, it may be that your approach of writing 0xFF to the drive wouldn't work reliably, because uniform data can be compressed very well.
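For reference, here is a rough sketch of the Secure Erase procedure mentioned above, using hdparm (the device name and password are placeholders; the drive must not report as "frozen", and the erase wipes everything on it):

Code:
# Check the Security section of the output; the drive should say "not frozen"
hdparm -I /dev/sdX

# Set a temporary user password (required before an erase can be issued)
hdparm --user-master u --security-set-pass PASSWD /dev/sdX

# Issue the Secure Erase -- this destroys ALL data on the drive
hdparm --user-master u --security-erase PASSWD /dev/sdX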
 
Old 06-15-2012, 07:50 PM   #6
nixblog
Member
 
Registered: May 2012
Posts: 426

Rep: Reputation: 53
This article may, or may not, answer your worries about SSDs and their lifespan. I have had no issues in two years of using SSD drives.
 
Old 06-15-2012, 07:50 PM   #7
jefro
Moderator
 
Registered: Mar 2008
Posts: 21,981

Rep: Reputation: 3625
I kind of get the feeling they will be more like USB flash drives. I have some 128MB ones and such that are basically useless. They may be fine, but they are slow and small, and a new 16GB one is on sale for $15, so why mess with it?

SSDs are kind of following that pace. Newer, faster, better and far cheaper ones come out before your old SSD fails.

As with all this stuff it comes down to MTBF, and that is usually a big fib from the maker.

I sure liked this way of making RAM-based storage. Shame it never caught on. http://www.google.com/products/catal...ed=0CGIQxBUwAA

Last edited by jefro; 06-15-2012 at 07:54 PM.
 
Old 06-15-2012, 08:12 PM   #8
nixblog
Member
 
Registered: May 2012
Posts: 426

Rep: Reputation: 53
Quote:
Originally Posted by jefro View Post
SSDs are kind of following that pace. Newer, faster, better and far cheaper ones come out before your old SSD fails.
Exactly

I would not keep a mechanical hard drive in my system for any more than three years, as by then new drives are faster, have more onboard cache, etc. I won't change that plan for my SSDs either.
 
  

