Linux - Hardware
This forum is for Hardware issues. Having trouble installing a piece of hardware? Want to know if that peripheral is compatible with Linux?
I was looking into getting some solid state drives for the first time. I have always used traditional Seagate SATA drives for my home systems, but I think I would like to try something new with much better performance. Do you guys know if I will see any performance gains and/or issues using an SSD on Linux? I run Arch and Debian Linux in general...
The solid state drives would offer a performance boost.
However, I do wonder about the lifespan (longevity) of the drives.
The media will only stand so many re-writes.
The embedded controllers are designed to use a write strategy that minimizes the possibility of excessive writes to the same area (consecutive writes are placed at different locations on the drive media, so that a single area isn't continuously written to and doesn't fail as a result).
It makes me ponder the problem of a single solid state drive Linux system where the drive is largely full of unchanging allocated data, and the swap file ends up residing in the same space on the drive most of the time. Could it lead to a premature failure of the drive due to the same area being subject to continuous writes?
Careful examination of the drive specs for total surface writes before failure would seem to be a crucial point in considering which drive to purchase.
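For anyone curious what that write strategy looks like in principle, here's a toy sketch of dynamic wear levelling in Python. Everything here is illustrative; real controllers do this in firmware with block pools, spare (over-provisioned) area, TRIM handling, and far more sophistication.

```python
# Toy sketch of dynamic wear levelling: a logical block is remapped to
# whichever free physical block has the fewest erases, so hammering one
# logical address still spreads wear across the whole medium.

class WearLevellingFTL:
    """Illustrative flash translation layer (names are made up)."""

    def __init__(self, n_physical):
        self.erase_counts = [0] * n_physical   # wear per physical block
        self.mapping = {}                      # logical -> physical

    def write(self, logical_block):
        # Simplified erase-on-rewrite: release the old physical block.
        self.mapping.pop(logical_block, None)
        used = set(self.mapping.values())
        free = [p for p in range(len(self.erase_counts)) if p not in used]
        # Place the new data on the least-worn free physical block.
        target = min(free, key=lambda p: self.erase_counts[p])
        self.erase_counts[target] += 1
        self.mapping[logical_block] = target
        return target

ftl = WearLevellingFTL(n_physical=8)
for _ in range(100):
    ftl.write(0)   # write the SAME logical block 100 times
# Wear still ends up spread almost perfectly evenly:
print(ftl.erase_counts)
```

The point is exactly the one made above: even a swap area that sits at one fixed logical location does not physically hammer one spot of flash.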
Quote:
Originally Posted by cgtueno
The solid state drives would offer a performance boost.
However, I do wonder about the lifespan (longevity) of the drives.
The media will only stand so many re-writes.
I've seen several calculations similar to the one in the article I just cited: if you overwrite the entire disk three times a day, it will last over 50 years. And that's for a 64GB SSD. My 256GB SSD will last over 200 years.
So in reality you'll be getting a new computer long before any SSD fails because of read/writes. It's better to read and verify these things yourself. Don't just believe me; apply the math. I've done it, which is why I'm citing that article.
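If you want to apply the math yourself, the arithmetic goes something like this. The program/erase cycle rating below is an assumption for illustration only; ratings of the era ranged from roughly 3,000 (MLC) to 100,000 (SLC), so plug in the figure from your drive's datasheet.

```python
# Back-of-the-envelope SSD endurance, assuming perfect wear levelling
# spreads writes evenly over all cells. pe_cycles is an ASSUMED rating;
# use your drive's datasheet figure.

def endurance_years(capacity_gb, pe_cycles, gb_written_per_day):
    """Years until the flash's rated total writes are exhausted."""
    total_writable_gb = capacity_gb * pe_cycles
    return total_writable_gb / gb_written_per_day / 365.0

# Writing 192 GB/day (a 64 GB drive overwritten three times daily):
print(endurance_years(64, 10_000, 192))    # roughly a decade at this rating
# The same daily workload on a 256 GB drive lasts four times longer:
print(endurance_years(256, 10_000, 192))
```

Note that for a *fixed* daily write volume, endurance scales with capacity, which is why the bigger drive in the post above comes out so far ahead.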
Quote:
Originally Posted by cgtueno
The embedded controllers are designed to use a write strategy that minimizes the possibility of excessive writes to the same area (consecutive writes are placed at different locations on the drive media, so that a single area isn't continuously written to and doesn't fail as a result).
That is somewhat true.
If you're really worried about this (you shouldn't be), you could always mount your logs, tmp folder, and temporary internet files on a RAM drive through fstab. I did that on my old EEE PC netbook because its SSD is older and more subject to the limited lifetime writes that cgtueno is talking about. Try doing that in Windows!
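For anyone wanting to try the same trick, entries along these lines in /etc/fstab put /tmp and the logs on tmpfs (RAM-backed). The sizes are illustrative, and be aware that anything on tmpfs, logs included, is lost at every reboot:

```
# /etc/fstab - tmpfs mounts to keep writes off the SSD (sizes illustrative)
tmpfs   /tmp       tmpfs   defaults,noatime,size=256m   0  0
tmpfs   /var/log   tmpfs   defaults,noatime,size=64m    0  0
```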
A real hard drive even today should outperform an SSD. What will most likely happen, though, is that your sweet SSD will become too small to be of any use before it actually fails.
There are a number of issues too that need to be looked at to help speed them up and extend life.
Thanks for citing the article.
Yes indeed I read it thoroughly.
The crux of the argument seems to be that modern SSDs will not suffer from degradation, because the re-writes required to damage a specific area of the drive cannot be accumulated (e.g. the 51 years in the article's cited scenario) before either the drive dies of some other failure, in line with the manufacturer's/designer's calculated MTBF, or the SSD becomes obsolete (the system it is installed in becomes obsolete or defective, etc.). If so, then (smiles) in view of the data being cited, this problem doesn't seem significant.
I concur with the cited author's comments about the manufacturers' specs being difficult to interpret. However, recent specs seem to be much improved over what I had read in years gone by.
Still, in the real world, storage devices don't always meet the calculated MTBF and longevity expectations published by designers and manufacturers. There are a lot of environmental factors that affect the longevity of the devices, even SSDs: variations in cooling, and so on.
As cited in the article (and elsewhere), there are a number of strategies employed in SSD controllers to minimize excessive writes to the same areas: reserved areas provisioned for substitution when an area becomes "unreliable" or permanently defective (not just at point of manufacture), write policies, and so on. This sort of information is seldom discussed in detail by designers and manufacturers. Indeed, if we agreed 100% with the maths offered in the article, there would be no need for such strategies except against highly improbable faults; yet these provisions are still incorporated into the design of the drives.
Also note that the cited article refers to optimal conditions, stating that he has reviewed his position as the devices have improved in capacity, and calculated longevity.
Don't get me wrong. I agree that the discussion makes theoretical sense (51 years sounds like a "nice" number). However, we all know that when you install a drive (any type of drive) into a system the environmental factors alter the situation reducing the longevity of the device (eg. cumulative effects of heat, etc).
Perhaps my advice is out of date. But still I wonder.
The only way to resolve the issue is to cite some actual physical destructive testing data for modern SSD drives. However, I can't seem to find any (lol). If you come across any I would be most interested to read it (Not a flame comment - a genuine expression of interest).
In the distant past I have had to do rough estimates of the longevity of conventional electromechanical hard disk drives used in computer systems running 24 hours a day, 365 days a year, to allow preemptive replacement. The gap between actual and theoretical (calculated from provided data) drive life is often astonishing.
Good to cite the EEE PC solution; a lot of people got caught by that particular instance.
In a nutshell: if the manufacturing costs of SSDs come down sufficiently, and the retail channel follows suit, and the capacities continue to rise, then SSDs will become the norm. As you correctly say, in theory we would be chucking the systems before the SSDs fail, or at least chucking them as obsolete in the face of increasing capacity, speed, lower cost, etc. However, we aren't there yet (especially cost-wise).
C.
PS. If you are interested, have a look at the approach Google took in creating its server farms (available on the www), and the reasoning behind it.
Quote:
Originally Posted by jefro
A real hard drive even today should out perform a ssd. What most likely will happen though is your sweet ssd will become just too small to be of any use before it actually fails.
Would you define the statement better? What's a "real hard drive" to you? Spinning media? A DRAM-based drive would outperform a NAND-based unit or a platter-based unit. The problem for a DRAM-based drive is density and backup, which depends on a battery.
Quote:
Originally Posted by jefro
There are a number of issues too that need to be looked at to help speed them up and extend life.
A real hard drive even today should out perform a ssd. What most likely will happen though is your sweet ssd will become just too small to be of any use before it actually fails.
I agree with onebuck. What do you mean? I'm going to assume you mean a 7200RPM spin-up HDD. If that's the case, its sustained transfer rate when loading into RAM would be around 60MB/s, even across a SATA II link. I've seen transfer rates of over 300MB/s when loading a program into RAM off my C300 256 (SSD). You should check out benchmarks on the C300 and other SSDs. You may be delightfully surprised.
Here's a nice benchmark showing you graphs of performance in MB/s over time comparing several different brands of SSD drives. I have benched my own C300 and have seen similar results and right now it is currently the fastest SSD on the market. You should bench the high capacity hard drive that you have in your computer right now and compare your results to the graphs in the articles. From that you should come up with your own conclusions. Are SSD drives really that slow?
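If you don't have a benchmarking tool handy, a crude sequential-read timer like the following gives you a ballpark MB/s figure to compare against those graphs. It's purely illustrative, not a substitute for a proper benchmark: run it on a large file, and remember the page cache will inflate repeat runs.

```python
import os
import time

def sequential_read_mb_per_s(path, chunk_bytes=1024 * 1024):
    """Crude sequential-read benchmark: stream `path` in 1 MiB chunks
    and report MB/s. Use a multi-GB file, and drop caches (or use a
    fresh file) between runs, or the page cache will skew the result."""
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk_bytes):
            pass
    elapsed = time.perf_counter() - start
    return (size / (1024 * 1024)) / elapsed

# e.g. print(sequential_read_mb_per_s("/path/to/some/large/file"))
```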
At my work we built a laptop for the chairman. We had a Core i5 Lenovo X201. We chucked out the 320GB spin up drive and put in a 256GB C300 SSD. I've never seen such a speed increase in programs. Here's some statistics of my experience with that.
Boot time 10s (including BIOS time).
Login and ready to be worked in 3s.
Loaded MS Office (any program including database) in less than a second.
Loaded Adobe Acrobat in less than a second.
Loaded Adobe Photoshop CS3 in 1s (first run). All consecutive runs the program opened faster.
And all of those programs were loaded over SATA I, which bottlenecked the C300 to a constant 150MB/s. I shudder to think what we'd see in a SATA II system, where it can break 300MB/s.
I used to hate on SSDs as well until that experience. It really made me realize how much of a bottleneck a hard drive is in modern systems. RAM transfers at 3.2GB/s (even old DDR-400). I believe SATA III drives will be able to approach 600MB/s. It's great to live in the future.
Quote:
Originally Posted by jefro
There are a number of issues too that need to be looked at to help speed them up and extend life.
Can you please list them so that we may either agree or dispute them with evidence or experience?
Quote:
Originally Posted by cgtueno
In the distant past I have had to do rough estimates for the longevity of conventional electro mechanical hard disk drives used in computer systems running 24Hrs x 365 days to allow preemptive replacement. The actual versus theoretical (calculated based on provided data) drive life is often astonishing.
I'm curious what factors you took into account to calculate the lifetime of the drives. Do you have any equations you could provide? I'd be interested to try my hand at them. I've done something similar at work, but for us drives are cheap and data is expensive, so we throw money at the problem with duplicates. We also calculate the daily and monthly data written to the drives so we can estimate the "lifetime" of the cluster and know roughly when we need to upgrade our capacity. That way it's a planned upgrade and not some random act. That's why I'm curious about your drive lifetime calculations. Thanks for sharing!
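To give a flavour of the planned-upgrade calculation I mean, it boils down to something like this (all figures below are made up for illustration, not our real numbers):

```python
# Capacity-planning sketch: given the measured daily write volume,
# estimate when the cluster hits capacity so the upgrade is planned
# rather than an emergency. All numbers are illustrative.

def days_until_full(total_capacity_tb, used_tb, tb_written_per_day):
    """Days of headroom left at the current net write rate."""
    return (total_capacity_tb - used_tb) / tb_written_per_day

# A 100 TB cluster with 60 TB used, growing 0.5 TB/day:
print(days_until_full(100, 60, 0.5))   # 80.0 days to get the order in
```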
Quote:
Originally Posted by cgtueno
In a nut shell. If the manufacturing costs of SDD come down sufficiently, and the retail channel follows suit, and the capacities continue to rise, then SSDs will become the norm. As you correctly say we would in theory be chucking the systems before the SSDs fail, or at least chucking them due to obsolescence due to increasing capacity, speed, low cost etc. However, we aren't there yet (especial cost wise).
In my opinion they're not unreasonably expensive when you consider the flash memory, the capacity, and especially the speed. Sure, the price/GB is much higher than, say, a 2TB spin-up drive. But compare the price/GB to a normal flash drive and $600 for a 256GB drive is generally reasonable. I'm not saying they're not expensive; I'm saying that, for what you get, it's pretty inexpensive.
Quote:
Originally Posted by cgtueno
PS. If you are interested have a look at the approach that Google took in creating it's server farms (available on the www), and their reasoning behind their approach.
Yeah! My boss turned me on to their papers. The tiered storage is pretty wild: a stack of solid state drives at the front and large-capacity drives behind. How often something is served determines whether the data resides in the faster SSD tier or the slower large-capacity spin-up tier. I still need to read up more about it. Thanks for the reference!
Do you know the relative electrical power consumption of SSD and spinup? Presumably we would need idle, contiguous read/write and non-contiguous read/write to get a meaningful picture.
Regards Google's tiered storage (a.k.a. HSM), I recall Novell announcing a similar architecture around 1996 but they are not mentioned in the Wikipedia HSM article. HSM makes so much sense, especially now SSD fits fairly nicely in the transfer-rate/capacity chasm between RAM and spinup.
Unfortunately I don't have any real-world experience regarding the power consumption of an SSD vs spin-up. Like I said, hard drives are cheap and data is expensive, so we throw money at the problem. I am also interested in the power consumption statistics, so could anyone enlighten me? Thanks for the HSM info, catkin. I feel like Novell has done a lot of firsts, like their network file protocol, which was basically copy-pasta'd by Microsoft into SMB. Like they say, though: first it's a race to do it first, then it's a race to do it better. The flight of the Wright brothers and the companies that followed is a perfect example of that (though that's a discussion for another time).
Power consumption of the C300 is between 94mW (standby) and 4.3W (active).
I just looked at a 7200RPM 2.5" spin-up hard drive and it runs at 5V @ 800mA, so 5 × 0.8 = 4W (active) for a laptop drive.
I just looked at a 7200RPM 3.5" spin-up HDD and it runs at 12V @ 800mA, so 12 × 0.8 = 9.6W (active) for a desktop drive.
Even under max load the SSD uses less than half the power of a desktop spin-up hard drive, and about the same as a laptop drive at a constant rate.
And this is with the C300 which is the most monstrous SSD. I'm sure the other low power SSD drives use even less power.
When you put the load into perspective against a screen or a processor, the power consumption of the hard drive (any drive) is almost negligible as far as the PSU (power supply) is concerned.
So I'm going to call the power argument a bust, because an SSD gives you equivalent or lower power consumption for more than 5 times the read/write performance. I'd say that's why Google would use them in their tiered storage. Right now the only thing holding back an SSD is capacity, but even that is going away as an argument as they get larger and larger (256GB is the largest I've seen).
So if I have a 1TB green drive that takes 5.4W during read/write, how many C300s would be needed to match it? I bet it takes more than 5.4W to get 1TB stored. The actual comparison needs to be apples to apples. Don't get me wrong, I think SSDs are the way of the future; we just can't use them in place of everything yet.
A watt is a watt no matter the input voltage. Volts times amps = watts.
We are comparing apples to apples. I presented all of the calculations in my previous post. If you transfer 1TB over an SSD or over a spin-up drive, it's still going to draw its rated wattage. So if it has fewer watts, it's going to use less power, plain and simple. I showed that in my calculations. Active means it's transferring at max power.
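One way to settle the apples-to-apples question is energy per terabyte moved, which folds in the transfer time as well as the wattage. The power figures below come from the posts above; the sustained throughput numbers are assumptions for the sake of illustration.

```python
# Energy to move 1 TB = active power × time spent transferring.
# Power figures are from the posts above; throughputs are ASSUMED.

def joules_per_tb(active_watts, mb_per_s):
    seconds = (1024 * 1024) / mb_per_s   # treating 1 TB as 1,048,576 MB
    return active_watts * seconds

ssd = joules_per_tb(4.3, 300)   # C300 at an assumed 300 MB/s on SATA II
hdd = joules_per_tb(9.6, 100)   # 3.5" 7200RPM desktop drive, assumed 100 MB/s
print(ssd, hdd)   # the SSD comes out well ahead
```

The SSD wins on both counts: it draws fewer watts *and* finishes the transfer sooner, so it uses several times less energy per terabyte even before you adjust the assumed throughputs.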