I have one solid state drive and was considering getting two more, but I was wondering if anyone has any idea how long they will last when used as regular hard drives.
Will they last as long as a disk drive in a normal system?
I have read that 2 TB drives have a read/write limit and that after that the memory fails.
How long an SSD lasts depends on how heavily it is used for writing. Wear-leveling logic helps, but how well it works depends on how much of the drive gets written to regularly. When data is written and never overwritten, it has to stay put, so there can be LOTS of flash-erase-sized sections that cannot be reused, which narrows the wear to fewer blocks. I regularly (every few months) do three full whole-drive wipe passes on my USB/CF/SD flash drives just to release any "can't erase this" sections and recirculate the pool. I don't know how much this helps even out the wear, but it seems like it could help more than the extra flash erasing would hurt.
It also depends on whether the drive uses MLC or SLC technology. MLC will, in theory, wear out faster than SLC, but it gives higher density and is more common.
Flash fabrication quality is reportedly around the level where 100,000 (or more) erase cycles is considered the median or typical failure point. But individual devices, or sections of devices, can vary widely from that figure. So how long an SSD lasts can also vary widely.
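If you want to see how a specific drive is holding up, smartmontools can usually report vendor-specific wear attributes. A sketch, assuming the package is installed; the attribute names vary by manufacturer and /dev/sda is just a placeholder:
Code:
# Query the drive's SMART data and pick out wear-related attributes
# (names such as Wear_Leveling_Count or Media_Wearout_Indicator differ by
#  vendor, and some USB bridges don't pass SMART through at all)
sudo smartctl -a /dev/sda | grep -i -E 'wear|erase|lifetime|percent'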
Wiping the disk by writing zeroes to it, or something like that, actually makes things worse. Nowadays SSDs are pretty good at compensating for wear (and wear out less in general), so if you use the usual methods (the discard mount option on ext4, or running the fstrim command from time to time) you should be fine.
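For reference, those two usual methods look roughly like this. A sketch only; the UUID is a placeholder and the mount point should match your own system:
Code:
# /etc/fstab entry with online discard enabled on an ext4 root
UUID=xxxx-xxxx  /  ext4  defaults,discard  0  1

# ...or leave fstab alone and trim manually (or from cron) instead
sudo fstrim -v /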
By the way, there is no way to "even out" the wear.
Once the wear is there, of course you can't undo it. The issue I'm concerned with is the once-written sections that are stranded with data even though the filesystem no longer needs it. Flash-specific filesystems working at the flash chip level can erase what is unallocated; they may even be able to move smaller allocation units elsewhere to allow a larger flash block to be erased. For the case of an ordinary filesystem on top of something presented as an ordinary drive with flash underneath, we depend on the filesystem having discard implemented, and ideally also on it moving blocks around so whole erase blocks can be discarded (I've seen none that do that yet) to maximize the effect. And I've yet to see an SSD device that actually implements discard (I've worked with only a handful of SATA devices, mostly USB, CF, and SD devices, so maybe some SATA devices do). Until we can successfully issue discards at the allocation unit size (4K for most filesystems, even though erase blocks are usually larger), the benefit is limited. We also have to remember that once a block is written with any 0 bit, it is not generally usable for writing anything else until erased (though plausibly an algorithm could test for reusable unerased blocks).
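One quick way to check whether a given device even advertises discard/TRIM support is to look at what the kernel reports for it. A sketch; sdX is a placeholder, and a non-zero granularity only means the device claims support, not that it handles it well:
Code:
# Show discard granularity and maximum discard size per block device
lsblk --discard

# Or read the queue limits directly; zero means no discard support reported
cat /sys/block/sdX/queue/discard_granularity
cat /sys/block/sdX/queue/discard_max_bytes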
I do my wipes now with 0xFF, not 0x00. That at least gives the device the opportunity not to lock anything out as having current data. This would be semi-equivalent to discarding if the device knows not to write data blocks that are all 0xFF. Given that writing 0xFF takes less time than writing 0x00, it would seem the device is erasing rather than writing.
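If anyone wants to try the same thing, one way to produce an all-0xFF stream on a stock system is to translate /dev/zero. A hedged sketch; sdX is a placeholder, and this overwrites the entire device, so be certain of the target:
Code:
# Turn the zero stream into 0xFF bytes and write it across the raw device;
# dd ends with a "no space left on device" message once the drive is full
tr '\000' '\377' < /dev/zero | sudo dd of=/dev/sdX bs=1M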
Some of the issues with flash discard ops may be due to poor design of tools. For example, hdparm allows no more than 65535 sectors per range. That's dumb.
BTW, what I mean by "even out" is releasing stranded sections so that they can rejoin the pool, while the more worn sections may end up stranded for a while, where they can relax on vacation.
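On devices that do honour TRIM, a simpler way to hand every unused sector back to the pool in one go is blkdiscard from util-linux, rather than overwrite passes or hdparm range bookkeeping. A sketch; sdX is a placeholder, the command destroys all data on the device, and it only works if the drive reports discard support:
Code:
# Discard every sector on the device, returning the whole pool to the controller
sudo blkdiscard /dev/sdX

# Or discard just a region, e.g. 1 GiB starting at a 4 GiB offset
sudo blkdiscard --offset 4294967296 --length 1073741824 /dev/sdX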
Ext4 and btrfs currently support the TRIM/discard feature (I use ext4 on my SSDs, since I think btrfs is still not "production ready"). Keep in mind that the OP is asking about SSDs, not other flash devices. Almost all modern SSDs (at least, I don't know of one that doesn't; can we speak of a second generation here?) also implement ATA Secure Erase, which marks all blocks on the SSD as free. This is fast (about 2 seconds on my 120 GB Corsair Force 3) and doesn't cause wear.
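For anyone who hasn't done it before, a Secure Erase is usually issued with hdparm along these lines. A hedged sketch; sdX and the password are placeholders, the drive must not be reported as "frozen", and the whole drive is wiped:
Code:
# Check that the drive supports the security feature set and is "not frozen"
sudo hdparm -I /dev/sdX | grep -A8 Security

# Set a temporary password, then issue the erase (all data is destroyed)
sudo hdparm --user-master u --security-set-pass pass /dev/sdX
sudo hdparm --user-master u --security-erase pass /dev/sdX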
When idling, and if they recognize the filesystem (sadly I haven't found any data on which filesystems are supported), modern SSDs also automatically move data around on the disk to keep large blocks free for fast writing.
Now that I think about it, since modern drives use automatic compression, your approach of writing 0xFF to the drive may not work reliably, because uniform data compresses very well.
This article may or may not answer your worries about SSDs and their lifespan. I have had no issues in two years of using SSD drives.
I kind of get the feeling they will be more like USB flash drives. I have some 128 MB ones and such that are basically useless. They may still work fine, but they are slow and small, and a new 16 GB stick is on sale for $15, so why mess with it?
SSDs are kind of following that same pace: newer, faster, better, and far cheaper ones come out before your old SSD fails.
As with all this stuff, it comes down to MTBF, and that is usually a big fib from the maker.
Quote: "SSDs are kind of following that same pace: newer, faster, better, and far cheaper ones come out before your old SSD fails."
Exactly
I would not keep a mechanical hard drive in my system for any more than three years, as by then new drives are faster, have more onboard cache, etc. I won't change that plan for my SSDs either.