[SOLVED] Does calling fsync() heavily on NAND flash (TLC) rapidly decrease its lifespan?
The more you write to a flash disk, the sooner it will wear out.
If you run that program, which loops infinitely with no delay, it will eventually wear out one block on the flash disk, then another, and another, and so forth.
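A minimal sketch (in Python, for illustration) of the kind of tight fsync() loop being discussed. The path and write size are hypothetical; an unbounded version of this on real flash would do exactly the damage described:

```python
import os

# Worst-case pattern: rewrite and fsync the same file in a tight loop.
# Every fsync() forces the data through to the flash device, so each
# iteration costs at least one physical program/erase operation.
# (Hypothetical path -- do NOT run this unbounded on a real flash device.)
def hammer_flash(path="/mnt/flash/testfile", iterations=1000):
    writes = 0
    for _ in range(iterations):
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
        try:
            os.write(fd, b"x" * 512)
            os.fsync(fd)  # flush straight through to the device
        finally:
            os.close(fd)
        writes += 1
    return writes
```

With no sleep between iterations, the device absorbs writes as fast as the bus allows, which is what makes the loop a wear concern in the first place.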
What can happen with wear leveling is that the successive iterations of creating a new file produce uniform writes across all free locations: you largely write to every free location once, and then the process repeats.
So say you can write to any given location 10 times. And say further that there are Z locations. It will go one of two ways.
Without wear leveling:
Location A: write first time
Location A: write second time
... continue up to the 10th time
Now you can't write to Location A anymore, so you start writing to Location B
Eventually you can't write to Location B, so you move to C, D, ... Z
And now the disk is no longer writable
With wear leveling:
Location A: write
Location B: write, and continue until you get to Location Z
Back to Location A: write, and continue to Z for a second pass
Eventually you'll loop 10 times and have written to every location the allowable number of times
End result: the disk is no longer writable, but no location wore out before the others
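The two patterns above can be sketched as a toy simulation. LIMIT and the A..Z locations come straight from the example; everything else is illustrative:

```python
LIMIT = 10  # allowed writes per location (from the example)
Z = 26      # number of locations, A..Z

def simulate(total_writes, leveled):
    """Return per-location wear counts after total_writes writes."""
    wear = [0] * Z
    loc = 0
    for _ in range(total_writes):
        if leveled:
            # round-robin: move to the next location after every write
            wear[loc] += 1
            loc = (loc + 1) % Z
        else:
            # hammer the current location until it hits its limit
            if wear[loc] == LIMIT:
                loc += 1
            wear[loc] += 1
    return wear

# Halfway through the disk's total write budget (130 of 260 writes):
half = Z * LIMIT // 2
unleveled = simulate(half, leveled=False)
leveled = simulate(half, leveled=True)
assert max(unleveled) == LIMIT       # some locations are already dead
assert max(leveled) == LIMIT // 2    # no location past half its budget
```

Both patterns exhaust the disk after the same Z * LIMIT total writes; the difference wear leveling makes is that no location dies early while others sit unused.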
The more complex points here are:
Write cycle counts (endurance ratings) are VERY HIGH in today's disk technologies
Given that they are so high, one write, ten writes, or 100 writes are not a big deal. It is partly a probability problem: when your limit is 2 million writes, which is the advertised guarantee of some disks, some, maybe many, locations may keep working at much higher write counts, and the manufacturer's intention is that very few to no locations fail below 2 million.
Disks are very large, so whether or not writes happen in fixed block sizes (they do), there is still very, very much free space available, provided your disk isn't largely full already.
I'm not sure writing one byte consumes a full 4K block; it may be 256 bytes. Even so, 4K compared to 4G is a factor of one millionth.
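Working that arithmetic through with the figures quoted in this thread (a 4K write, 4G of free space, a 2-million-write guarantee); the numbers are the poster's, not a device specification:

```python
# The ratio mentioned above: one 4 KiB write against 4 GiB of free space.
write_size = 4 * 1024        # 4 KiB, a common flash write unit
free_space = 4 * 1024**3     # 4 GiB
fraction = write_size / free_space
assert fraction == 1 / 2**20  # one 1,048,576th -- "a factor of one millionth"

# Combined with the 2-million-write guarantee quoted above, the total
# data writable before wear-out in this toy model would be:
endurance = 2_000_000                 # writes per location (quoted figure)
locations = free_space // write_size  # 1,048,576 free locations
total_writes = locations * endurance  # writes of 4 KiB each
total_bytes = total_writes * write_size
assert total_bytes == endurance * free_space  # about 8 million GiB here
```

In other words, under these assumptions you would have to rewrite the entire free area millions of times over before exhausting the write budget, which is why a modest number of fsync() calls is not itself a lifespan problem.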