Linux - Hardware
This forum is for Hardware issues. Having trouble installing a piece of hardware? Want to know if that peripheral is compatible with Linux?
Hi, I'm doing a new computer build and thinking about an ssd for the system, slackware 64 latest stable.
I've read up and see what to do about system tweaks like trim.
What I would like to know from those of you actively using an SSD: what size is it, about what did you pay, how long has it been installed, have you done any system tweaks, which distro/kernel are you running, how is the system performance, is it what you expected, and would you suggest anything else?
I use an Intel X25-V 40GB SATA 2 SSD in my laptop with Slackware 13.37 (I will go back to -current when the AMD legacy driver is released). I use ext4 on it with the usual tweaks (proper partition alignment, periodic TRIM using fstrim, the noop scheduler). It has worked fine for about 2.5 years in that machine; the price was about 120€ at the time.
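For anyone wondering what those tweaks look like in practice, here is a rough sketch. The device name /dev/sda and the start-sector value are placeholders for illustration; check your own disk with fdisk:

```shell
# Check partition alignment: a start sector that divides evenly by 2048
# (512-byte sectors) sits on a 1 MiB boundary, which suits SSD erase blocks.
# Get the real value with: sudo fdisk -l /dev/sda
START_SECTOR=2048   # placeholder value for illustration

if [ $((START_SECTOR % 2048)) -eq 0 ]; then
    echo "partition aligned"
else
    echo "partition misaligned"
fi

# Periodic TRIM on a mounted ext4 filesystem (run as root, e.g. from cron):
#   fstrim -v /
# Switch the I/O scheduler to noop for the SSD:
#   echo noop > /sys/block/sda/queue/scheduler
```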
I also use a Corsair Force 3 120GB SATA 3 in my main machine with a Vista/Slackware64 -current dual boot, also ext4 with the same optimizations. This piece of hardware is about 6 months old and works like a charm.
Newegg has been having some super deals on SSDs; I've seen some 120GB models go for $70 multiple times over the past month. These are different SSDs, and there are lots of deals all the time.
It looks like most consumer MLC SSDs can take about 5GB of writes per day for around 5 years before dying, so that's what you want to be aware of. It shouldn't be an issue at all, but this is what testing & Intel have said.
Keep an eye out for those deals though; they'll save you a good chunk.
Yeah, I'm running the 90GB version (had it for half a year or so) and it runs very well. W7 boots up in a few seconds (not counting the BIOS), and Linux on it doesn't even seem to boot, it just pops up with the login screen (lol). Of course the BIOS takes longer than anything.
It would be nice if you shared where you got the 5GB-per-day write figure and the 5-year time frame, because Intel provides better numbers than what you posted:
If you don't want to crunch through the math, Intel estimates that the 80GB X25-M will last for five years with "much greater than" 100GB of write-erase per day. That's a relatively long time for much more data than most folks are likely to write or erase on a daily basis.
Actual drive lifespans aside, Intel rates the X25-M's Mean Time Between Failures (MTBF) at 1.2 million hours. That's competitive with the MTBF rating of other MLC-based flash drives and equivalent to common MTBF ratings for enterprise-class mechanical hard drives.
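To put the two claims in perspective, the totals are easy to work out. These figures are just the posters' numbers multiplied out over 5 years, not drive specs:

```shell
# Total data written over 5 years under each claimed daily write rate
DAYS=$((365 * 5))               # 1825 days
LOW_TB=$((5 * DAYS / 1000))     # the 5GB/day claim
HIGH_TB=$((100 * DAYS / 1000))  # Intel's "much greater than 100GB/day" figure
echo "5GB/day over 5 years:   ~${LOW_TB}TB"
echo "100GB/day over 5 years: ~${HIGH_TB}TB"
```

About 9TB total versus about 182TB total, so the two estimates differ by a factor of twenty.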
Old article but still comparative data. Newer 'MLC' SSD have a typical MTBF of '1,500,000 hrs'. Wear leveling & write amplification have greatly improved for newer controllers using 'MLC'.
If I never install another application and just go about my business, my drive has 203.4GB of space to spread out those 7GB of writes per day. That means that in roughly 29 days, if my SSD wear levels perfectly, I will have written to every single available flash block on the drive. Tack on another 7 days if the drive is smart enough to move my static data around to wear level even more thoroughly. So we're at approximately 36 days before I exhaust one of my ~10,000 write cycles. Multiply that out and it would take 360,000 days of using my machine the way I have for the past two weeks for all of my NAND to wear out; once again, assuming perfect wear leveling. That's 986 years. Your NAND flash cells will actually lose their charge well before that time comes, in about 10 years.
This assumes a perfectly wear-leveled drive, but as you can already guess, that's not exactly possible.
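The arithmetic in that quote can be reproduced directly. The numbers below are the article's own assumptions (roughly 203GB of free space, 7GB of writes per day, ~10,000 P/E cycles), rounded to integers:

```shell
FREE_GB=203        # free space to spread writes across
DAILY_GB=7         # writes per day
CYCLES=10000       # assumed P/E cycles per MLC cell

DAYS_PER_PASS=$((FREE_GB / DAILY_GB))          # ~29 days to write every block once
TOTAL_DAYS=$(( (DAYS_PER_PASS + 7) * CYCLES )) # +7 days for static-data shuffling
YEARS=$((TOTAL_DAYS / 365))

echo "${DAYS_PER_PASS} days per full pass, ~${YEARS} years until wear-out"
```

That lands on the article's 29 days per pass and ~986 years, all under the (unrealistic) assumption of perfect wear leveling.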
Also take into consideration that 5GB of writes a day is something like 50 times what the average person writes, which is going to be just the browser cache and that's about it.
That is one of the reasons to use some older data relative to MLC: to get a fair comparison given both the density differences today and the newer controllers. Wear leveling & write amplification are the two major variables that everyone should pay attention to.
People who use an MLC-based SSD in an enterprise setting and then complain when wear or failure occurs should not be maintaining, let alone building, a system around SSD MLC technology. A competent admin would instead choose EFD (SSD SLC) technology for the enterprise and not worry about the cost factor. Right tool for the job!
The first 25nm product is an 8GB (64Gbit) 2-bits-per-cell MLC NAND flash. A single 8GB die built on IMFT's 25nm process has a die size of 167mm². Immersion lithography is apparently necessary to produce these 25nm NAND devices, but to what extent is unclear. This is technically Intel's first device that requires immersion lithography to manufacture.
The above excerpt defines Intel's technology for MLC. Whole article is very informative.
Intel & Micron now use 20nm by way of IM Flash Technologies (IMFT) to double the density for newer drives. One of the reasons everyone is seeing a SSD resell/dump at a great savings. Newer SSD that use IMFT will have greater density with improved controllers.
Too much 'FUD' out there!
BTW, nice links to the logic gates, which are not what we are talking about. Plus the flash link is not valid for what we are speaking about. NAND flash technology has changed to 2- or 3-bit cells for use within an SSD.
You directly posted about an SSD with NAND, so sorry for talking about it *rolleyes*
We are talking about 2-3 bit NAND flash memory, not a Boolean NAND logic gate. You can demonstrate a NAND using a DIP or transistor(s). Sure, the Boolean function is a good example: your logic gate examples are for the Boolean NAND, and yes, you can build a memory using logic gates.
We can use the logic tree to show the function of a NAND and therefore the state of the memory cell we are using.