A technical question about USB flash drives and SSDs in general.
I am reading up on solid-state devices. I understand that formatting a USB flash drive wears the cells on the device. However, for sanitization purposes, what I do is overwrite with 0's anyway.
I read somewhere that the controller on the device does maintenance to keep the drive alive, so there is no guarantee that a cell is actually cleared.
I'm not sure I have a good understanding of this, so let me ask a question.
If I look at the drive size with fdisk, write zeros to the drive for that many bytes, and then check the drive and it reads back all zeros, does that mean the controller overwrote all the cells? Or is something else going on?
The explanation said that cells that fail are simply no longer used.
Does this also mean that if I check the disk size later, it will be smaller? Or will it have bad blocks?
I would like a clear understanding of this, so if you can, please go into detail for me.
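To make the question concrete, here is a sketch of the zero-and-verify procedure being described. It uses a plain file as a hypothetical stand-in for a device like /dev/sdX, since running this against a real drive destroys its contents; note that on a real SSD a clean read-back does not prove every physical cell was erased, because the controller may have remapped cells behind the scenes.

```shell
# Hypothetical sketch: /tmp/fakedisk.img stands in for a real device such
# as /dev/sdX. On a real SSD the controller may remap cells, so reading
# back all zeros does not guarantee every physical cell was overwritten.
img=/tmp/fakedisk.img
dd if=/dev/zero of="$img" bs=1M count=8 status=none       # create an 8 MiB "drive"
size=$(blockdev --getsize64 "$img" 2>/dev/null || stat -c %s "$img")  # byte size, as fdisk would report
dd if=/dev/zero of="$img" bs=1M conv=notrunc status=none  # overwrite with zeros
# verify: compare exactly $size bytes against /dev/zero
if cmp -n "$size" "$img" /dev/zero; then echo "reads back all zeros"; fi
```

The comparison answers only "what does the device present to the host", which is exactly the gap the replies below discuss.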
Distribution: Currently: OpenMandriva. Previously: openSUSE, PCLinuxOS, CentOS, among others over the years.
SSDs (Solid State Drives) and more traditional "mechanical" drives work differently because different technologies are involved. A traditional mechanical drive has a spinning platter (where the data itself is stored) with a read/write head that moves to whichever part of the platter holds the data being requested by the system.
SSDs hold data in flash chips instead of on a spinning platter. Data on an SSD never gets overwritten in place: a certain number of file system blocks are reserved for when new data is written, with the old blocks then being marked to be erased at some future point in time.
Only if you filled the WHOLE drive with zeroes would you stand any chance of the data being overwritten, and even then the controller might swap some of the reserved cells in and out, so some might be missed.
Writes are accomplished by first issuing a delete and then a write. The controller will mark data you delete as deleted and will wait for a quiet time to actually delete it, unless a write is issued by the controller to that cell.
Also, SSDs will rearrange data to make access more efficient (called housekeeping) if left powered up while the system is idle.
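The deferred-erase behaviour described above is what the TRIM/discard mechanism exposes to the operating system. As a quick, read-only check, you can see whether the kernel thinks a device accepts discard requests at all (many cheap USB flash drives do not):

```shell
# Show TRIM/discard support for all block devices. Non-zero DISC-GRAN
# and DISC-MAX values mean the device accepts discard requests; zeros
# mean it does not (typical for many USB flash drives).
lsblk --discard
```

This only reports what the device advertises; it says nothing about when the controller actually performs the erases.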
Quote:
Originally Posted by dave@burn-it.co.uk
Only if you filled the WHOLE drive with zeroes would you stand any chance of the data being overwritten, and even then the controller might swap some of the reserved cells in and out, so some might be missed.
Writes are accomplished by first issuing a delete and then a write. The controller will mark data you delete as deleted and will wait for a quiet time to actually delete it, unless a write is issued by the controller to that cell.
Also, SSDs will rearrange data to make access more efficient (called housekeeping) if left powered up while the system is idle.
David, SSDs do NOT overwrite data; the blocks must be erased first, BEFORE any data can be written to them. Zeroing an SSD is the worst (or close to it) thing you could do!
Why don't you read posts before you make such comments???
You even quote what I said, but still did not read it.
To REPEAT: "Writes are accomplished by first issuing a delete and then a write."
At no time did I suggest that writing zeroes to it was a good idea. In fact, I implied that it would not achieve the desired result.
Main articles: Wear leveling and Write amplification
If a particular block was programmed and erased repeatedly without writing to any other blocks, that block would wear out before all the other blocks — thereby prematurely ending the life of the SSD. For this reason, SSD controllers use a technique called wear leveling to distribute writes as evenly as possible across all the flash blocks in the SSD.
In a perfect scenario, this would enable every block to be written to its maximum life so they all fail at the same time. Unfortunately, the process to evenly distribute writes requires data previously written and not changing (cold data) to be moved, so that data which are changing more frequently (hot data) can be written into those blocks. Each time data are relocated without being changed by the host system, this increases the write amplification and thus reduces the life of the flash memory. The key is to find an optimum algorithm which maximizes them both.[61][62]
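The write amplification mentioned above is conventionally expressed as the ratio of data physically written to flash to data written by the host. A quick sketch with made-up illustration figures (not measurements from any real drive):

```shell
# Write amplification = flash writes / host writes.
# The figures below are invented for illustration only.
host_mb=1000    # host wrote 1000 MB
flash_mb=1300   # controller wrote 1300 MB (relocations, wear leveling)
# integer math, scaled by 100 to get two decimal places
wa=$(( flash_mb * 100 / host_mb ))
echo "write amplification: $(( wa / 100 )).$(( wa % 100 ))"   # -> write amplification: 1.30
```

A ratio of 1.0 would mean no extra internal writes; wear-leveling relocations push it higher and eat into the flash's finite erase cycles.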
On occasion, users may wish to completely reset an SSD's cells to the same virgin state in which they were manufactured, thus restoring its factory-default write performance. Write performance is known to degrade over time even on SSDs with native TRIM support. TRIM only safeguards against file deletes, not replacements such as an incremental save. Warning:
Back up ALL data of importance prior to continuing! Using this procedure will destroy ALL data on the SSD and render it unrecoverable by even data recovery services! Users will have to repartition the device and restore the data after completing this procedure!
Do not proceed with this if the target drive isn't connected directly to a SATA interface. Issuing the Secure Erase command on a drive connected via USB or a SAS/RAID card could potentially brick the drive!
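For reference, the ATA Secure Erase procedure alluded to above is typically driven with hdparm. The sketch below is a dry run: each command is printed rather than executed, and /dev/sdX is a placeholder. Only substitute the real device and drop the `echo` prefixes after backing up and confirming the drive sits on a direct SATA port, per the warning above.

```shell
# DESTRUCTIVE procedure, shown as a dry run (commands printed, not run).
# Replace /dev/sdX with the real device and remove the 'echo' prefixes
# only after backing up and verifying a direct SATA connection.
DEV=/dev/sdX
echo hdparm -I "$DEV"                                     # 1. check the drive reports "not frozen"
echo hdparm --user-master u --security-set-pass p "$DEV"  # 2. set a temporary password ("p")
echo hdparm --user-master u --security-erase p "$DEV"     # 3. issue the Secure Erase
```

If step 1 shows the drive as "frozen", a suspend/resume cycle usually unfreezes it before the password can be set.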
You can dig deeper into the above-mentioned wiki pages to gain a better understanding.
Distribution: Debian /Jessie/Stretch/Sid, Linux Mint DE
What I understand is that when an SSD is filled to almost its maximum, writes are performed on the same cells over and over. So if you have 100 MB of free space and you write/replace 50 MB every day, those cells are written every two days. If the cells' lifetime is 200 write cycles, the average failure occurs in 400 days. Average, which means that failure can occur much sooner.
If, OTOH, you have 4 GB of free space on that drive, the same cells are only written every 80 days. So it takes much longer before 200 write cycles are reached for all cells.
This is not in contradiction with OneBuck's quotes. It's just my rule to increase reliability of SSD drives: make sure you have plenty of free space.
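The back-of-envelope numbers above can be checked directly. Using the post's own illustrative figures (real flash cells endure far more than 200 erase cycles; these values are just for the argument):

```shell
# Endurance estimate using the figures from the post (illustrative only).
free_mb=100; daily_mb=50; cycles=200
days_per_cycle=$(( free_mb / daily_mb ))               # free space fully rewritten every N days
echo "$(( cycles * days_per_cycle )) days at ${free_mb} MB free"    # -> 400 days at 100 MB free
free_mb=4000
days_per_cycle=$(( free_mb / daily_mb ))
echo "$(( cycles * days_per_cycle )) days at ${free_mb} MB free"    # -> 16000 days at 4000 MB free
```

The ratio is linear in free space, which is why "keep plenty of free space" is such an effective rule of thumb.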
I think you will find that the controller's housekeeping will move data around the drive so that frequently written cells are rotated, ensuring wear levelling.
Distribution: Debian /Jessie/Stretch/Sid, Linux Mint DE
Errr... yes, but does that improve anything? Say I have a 4 GB SSD with 3.9 GB of static data. Either the wear-levelling algorithm does nothing and my statement holds, or it moves around 50 MB or 100 MB and performs another write operation into that free space. The only difference is that it is static data written into that space instead of my dynamic data.
Most of my USB drives have been good until I do something I/O intensive on them, like compiling sources or downloading torrents. I've used most of them as my primary Linux install for more than 6 months. The ones I do I/O-heavy work on seem to last half that time or less. I've only had one fail so far, but I do have trust issues. Avoid filling them more than 50% and you can avoid a lot of issues.
Quote:
The only difference is that it is static data written into that space instead of my dynamic data.
Yes, but static data is updated less often, and it is also revisited and reorganised dynamically. Data is also rearranged so that often-changed small files are placed together in the same 4k block wherever possible, and will therefore be more likely to remain in buffers and not need rewriting as often.
Distribution: Debian testing/sid; OpenSuSE; Fedora; Mint
SSDs parallelize reads and writes. Individual flash chips are actually slower than spinning platters, but SSDs write to many different chips at once. By the time the first chip is done with its write, the controller is issuing another write to it.
Because of the almost random nature of writing to an SSD, even if you try to fill it up with zeroes, it probably won't be full. But every SSD manufacturer makes an erase utility for their drives; that's what should be used. If the information is of great importance, the drive should be physically shredded. A blender works well for SSDs, and smart phones too.
USB flash drives also parallelize writes, but the firmware is etched into the controller and is not very sophisticated. Some of the operating software actually lives on the drive itself. If you take a 30 MB/s drive, write zeroes to its first MB or two several times, and then reformat it, it will be a 3 MB/s drive after that.