Linux - Software
This forum is for Software issues. Having a problem installing a new program? Want to know which application is best for the job? Post your question in this forum.
11-17-2009, 04:36 PM   #1
Member
Registered: Jun 2009
Posts: 56
Will Linux drivers distribute writes over a flash drive to avoid memory wear?
Okay, we have a program which runs on a flash drive and was intended to write to the hard drive on a semi-regular basis. These writes all happen in a few small files which always reside at a specific location on the hard drive. Only after we developed most of it did I remember that the main weakness of flash is memory wear from too many writes/erases.
I'm trying to determine how long our application can run before wear becomes an issue, which seems to come down to how Linux manages writes to the drive. I don't expect anyone to know exactly how my distro (RHEL 5.2) would handle this, but I have some general questions to make sure I correctly understand the issue...
1) I know that there are means of minimizing the wear on a drive by tracking the number of writes to each sector and distributing writes over all sectors. If Linux does this, the drive should last longer than the expected life of our SBC. But does Linux know to do this? Can it detect that it's running on a flash drive instead of a regular hard drive, and will it handle writes differently if it does?
2) Is there a way to install a driver or otherwise modify how writes are done to flash?
3) If sectors start failing due to wear, will Red Hat detect the bad blocks and move the files being overwritten to good blocks? And what kind of negative effect would a flash drive with, say, half of its sectors failing have on my application, assuming that Red Hat has quarantined the bad sectors and that only a fraction of the drive is necessary for the running application?
4) If I'm constantly writing to the same location on the hard drive, will each write actually be written to the hard drive? Or will it be stored somewhere in memory and never written back to the hard drive (since I'm accessing it so often)?
5) Can anyone suggest a link to where I would look to get specific answers?
I should also point out that none of the files being written are considered high priority, just error logging and saving of user preferences/state. A single incorrect bit is little more than a nuisance.
Yeah, I think that's everything I can think of. Any information or links to decent resources are appreciated.
Thank you
11-17-2009, 04:54 PM   #2
Senior Member
Registered: Jan 2006
Posts: 4,363
Last I checked, where things are written on a flash drive/card is determined by the controller chip on that card, so this will be a card-specific issue.
You can also look at moving a lot of the system to RAM (the parts that get written a lot). There have been several threads on this option.
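For what it's worth, a minimal sketch of that approach, assuming the frequently rewritten files lived under a hypothetical /var/log/myapp directory, would be a tmpfs mount so those writes land in RAM rather than on the flash:
Code:
# /etc/fstab entry: keep frequently rewritten files in RAM instead of on flash
tmpfs   /var/log/myapp   tmpfs   defaults,size=32m   0 0

# or mount it on the fly for testing
mount -t tmpfs -o size=32m tmpfs /var/log/myapp
The trade-off is that tmpfs contents are lost on power-off, so anything worth keeping has to be copied back to the flash periodically (a cron job running rsync, for example).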
11-17-2009, 06:04 PM   #3
Moderator
Registered: Aug 2002
Posts: 26,853
AFAIK it's dependent on the hardware and filesystem drivers and not the OS itself. What is the make/model of the SBC and what type of flash drive are you using? I would assume it uses wear leveling technology and should not be a problem. Typical plain (i.e. no special hardware of any kind) flash memory is limited to 100,000 erase cycles. If you completely rewrite the memory 10 times a day it would take approx 27 years to wear it out. Wear leveling can improve this to 1,000,000 erase cycles. Using a non-journaling filesystem like ext2 and turning off swap would also help to keep wear down.
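A quick sketch of both of those suggestions, plus the arithmetic, assuming a purely hypothetical /dev/sdb1 data partition:
Code:
# stop the kernel from paging to the flash device
swapoff -a            # and comment out the swap line in /etc/fstab

# mount the data partition as ext2 (no journal), avoiding journal writes
mount -t ext2 /dev/sdb1 /data

# lifetime estimate: 100,000 erase cycles at 10 full rewrites per day
echo $(( 100000 / 10 / 365 ))    # ~27 years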
As a side note, in the early days of EEPROMs an errant loop of mine wore out several of them in the blink of an eye before I discovered the problem.
11-18-2009, 03:12 AM   #4
Senior Member
Registered: Jul 2007
Location: Directly above centre of the earth, UK
Distribution: SuSE, plus some hopping
Posts: 4,070
When you say 'flash drive' you need to be aware that thumb drives are completely different from SSDs (which have a controller, spare capacity and wear levelling, which may work better or worse depending on the algorithms employed).
Quote:
1) I know that there are means of minimizing the wear on a drive by tracking the number of writes to each sector and distributing writes over all sectors. If Linux does this, the drive should last longer than the expected life of our SBC. But does Linux know to do this? Can it detect that it's running on a flash drive instead of a regular hard drive, and will it handle writes differently if it does?
Linux doesn't do this by default, although you may find that choosing the right file system and setting it up correctly may well produce exactly this effect. But then, if the SSD is doing it all by itself, why would you need to do it as well?
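To give one concrete case: if the flash were exposed to Linux as a raw MTD device rather than hidden behind a controller (which is not how USB sticks or SSDs present themselves), a flash-aware filesystem such as JFFS2 would handle the wear levelling in software. A rough sketch, with /dev/mtd1 as a purely hypothetical partition name:
Code:
# erase the raw flash partition (flash_erase comes from mtd-utils)
flash_erase /dev/mtd1 0 0

# mount it with JFFS2, which spreads writes across the flash in software
mount -t jffs2 /dev/mtdblock1 /mnt/flash
On a controller-backed SSD, though, the point above stands: the firmware is already doing this for you.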
11-18-2009, 04:09 AM   #5
Member
Registered: Jul 2007
Posts: 59
Quote:
Originally Posted by michaelk
If you completely rewrite the memory 10 times a day it would take approx 27 years to wear it out. Wear leveling can improve this to 1,000,000 erase cycles.
In 27 years we can expect flash drives measured in TBs anyway.
11-18-2009, 09:45 AM   #6
Member
Registered: Jun 2009
Posts: 56
Original Poster
You see, this is why I asked here before trying to dig too deep into this; it seems I was thinking this was a software issue when it's really a hardware/firmware issue. Thank you for saving me a good bit of confusion before I figured that out.
I suppose I'll look into our SBC's SSD to figure out whether it does wear leveling (although now that I realize it's a hardware issue, I suspect any good SBC would).
Quote:
Wear leveling can improve this to 1,000,000 erase cycles. Using a non-journaling file system like ext2 and turning off swap would also help to keep wear down.
Even with wear leveling, we would still only get 1,000,000 cycles total? We have 14 applications which could each potentially modify a file (a different file for each app) once every 30 seconds. Realistically they won't write nearly that often, but if I understand correctly, that means our worst case scenario would have us hitting issues as soon as a month after going active? Or am I misunderstanding your statement?
Any idea how much of an effect ext2 and deactivating swap would have on expected runtime?
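For reference, here is the rough worst-case arithmetic I'm working from, assuming every write costs one full erase cycle and contrasting the two ways I could read that 1,000,000 figure:
Code:
# 14 apps, each writing its own file up to once every 30 seconds
# -> 28 writes per minute across the whole drive

# reading 1: 1,000,000 cycles is a shared budget for the entire drive
echo $(( 1000000 / 28 / 60 / 24 ))   # ~24 days, i.e. roughly a month

# reading 2: 1,000,000 cycles apply per block and each file stays on its own block
# (so any one block sees at most 2 writes per minute)
echo $(( 1000000 / 2 / 60 / 24 ))    # ~347 days, i.e. roughly a year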
Last edited by dsollen; 11-18-2009 at 09:51 AM.
11-18-2009, 01:41 PM   #7
Member
Registered: Feb 2007
Posts: 142
Quote:
Originally Posted by dsollen
Even with wear leveling, we would still only get 1,000,000 cycles total? We have 14 applications which could each potentially modify a file (a different file for each app) once every 30 seconds. Realistically they won't write nearly that often, but if I understand correctly, that means our worst case scenario would have us hitting issues as soon as a month after going active? Or am I misunderstanding your statement?
Any idea how much of an effect ext2 and deactivating swap would have on expected runtime?
Unless you are writing to the same spot on the SSD, you won't see any problems anytime soon. It's the writes to the individual memory cells that cause the wear, but even the worst controllers don't continually rewrite the same spot.
By the time you can reasonably expect any problems you can also expect at least one of the following to occur:
- you discover you need a larger drive
- the drives have become so cheap in the size you want that replacement isn't a problem
- the technology has been supplanted by something even faster.
However, I'd also specify noatime on the mount to reduce the number of writes to the directory and improve performance.
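Something along these lines in /etc/fstab would do it (device, mount point and filesystem type are only placeholders for whatever the board actually uses):
Code:
# mount the flash-backed data partition without access-time updates
/dev/sdb1   /data   ext2   defaults,noatime   0 2
With noatime the kernel stops rewriting inode metadata every time a file is merely read, which eliminates a surprising number of small writes.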
11-19-2009, 01:09 PM   #8
Member
Registered: Jun 2009
Posts: 56
Original Poster
Quote:
Unless you are writing to the same spot on the SSD
Well, that is sort of my concern. We are writing to the same (small) file, which should be saved in the same spot on the drive. More accurately, we're saving to 14 different files that are each saved at a specific spot on the drive. I understand the point that the flash should hold out 'long enough' not to be an issue, but I want to be able to give the others on my team as accurate an estimate as possible of the expected runtime before flash limitations become an issue.
So if I understand correctly, 1,000,000 writes to any one of the files will be (approximately) the limit on my system, and the fact that 14 different apps are each writing to different files doesn't matter, just the number of writes done to a specific file? Which, at a worst case of 2 writes a minute, means about a year before I would hit the 1,000,000 write mark? In other words, is the 1,000,000 writes estimate you mentioned the number of writes to an arbitrary sector, or the total number of writes?
11-20-2009, 09:02 AM   #10
Member
Registered: Feb 2007
Posts: 142
When you say writing to the same spot on the drive, am I to understand that you are bypassing both the file system and the drive's firmware? Otherwise, the drive's firmware does the wear leveling.
Just because you are writing to the same spot in a file doesn't mean that you are writing to the same spot on the physical device. Indeed, if you try writing a small file over and over again to an SSD, you will eventually write to every spot on the SSD. At that point, performance will deteriorate on many (esp. older) SSDs as the drive has to free up space from the "used" pool before it can be written to again.
Many newer drives avoid, or at least reduce, this problem so they maintain a more consistent performance level.