LinuxQuestions.org > Linux - Software: This forum is for Software issues.
They all basically do the same thing, so dd is a good choice since you don't need to install anything. The one catch with dd is that it defaults to 512-byte blocks, which takes too long, since most file systems write data in blocks made up of many 512-byte sectors. Dcfldd will default to block size. By defaulting to block size, the process is much faster, and if you're not concerned about a "secure wipe" for security, block size is the way to go.
To find the block size of the file system, simply run: sudo blockdev --getbsz /dev/partition, substituting the partition name that sudo fdisk -l shows you. Most often it will be 4K (4096-byte) blocks, as on my current Fedora system.
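As a sketch of that step (the device name here is only a placeholder; use whatever fdisk -l reports on your system):

Code:
```shell
# Ask the kernel for the file-system block size of a partition (needs root).
# /dev/sda1 is an example name only.
sudo blockdev --getbsz /dev/sda1

# A non-root alternative: ask stat for the block size of a mounted file system.
stat -fc %s /
```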
You can also write "bs=4K" instead of "bs=4096". The first example zero-fills the drive; the second writes random data.
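The two dd examples referred to above did not survive in this thread capture; they were presumably along these lines. Shown here against a scratch file with a count= limit for safety; for a real wipe, point of= at the device (e.g. /dev/sdX) and omit count so dd runs to the end of the disk:

Code:
```shell
# First example: zero fill at 4K block size (count limits this demo to 1 MiB).
dd if=/dev/zero of=wipe.img bs=4096 count=256 status=none

# Second example: random fill; bs=4K means exactly the same as bs=4096.
dd if=/dev/urandom of=wipe.img bs=4K count=256 status=none
```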
No other utility will be faster: they all do the same thing, and once they all default to block size, the speed is determined by your computer's resources and the drive itself.
Last edited by Brains; 01-27-2019 at 02:49 AM.
Reason: corrected incorrect command
Disks have caches much bigger than 4K nowadays. Why not use some of that for more speed?
Code:
# dd if=/dev/zero of=/dev/sdg bs=48K
Because data is written to 4K blocks. If you want to "Wipe" a drive, you want to get every block. If you simply want to erase partition information, you only need to wipe a small part of the beginning.
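A sketch of that beginning-only wipe (the device name is a placeholder; double-check it before running anything like this):

Code:
```shell
# Zero only the first MiB of the disk, which covers the MBR and the primary GPT.
# /dev/sdX is a placeholder, not a real device name.
sudo dd if=/dev/zero of=/dev/sdX bs=1M count=1
# Note: GPT also keeps a backup header at the very end of the disk,
# which this beginning-only pass does not touch.
```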
Keep in mind, the OP asked this question, "What's the current way to wipe a HDD?", and I answered the OP's question. If you have a question of your own, start another thread.
Quote:
If you simply want to erase partition information, you only need to wipe a small part of the beginning.
Exactly.
Quote:
Keep in mind, the OP asked this question, "What's the current way to wipe a HDD?", and I answered the OP's question...
That you did, and more. My question was about the propriety of your recommending bs=4096 when performing a wipe, leaving open a question of whether bs=4096 is some ad hoc improvement over no bs at all, or an optimal one, not an ipso facto reason for a new thread unless someone wants a new one.
Quote:
That you did, and more. My question was about the propriety of your recommending bs=4096 when performing a wipe, leaving open a question of whether bs=4096 is some ad hoc improvement over no bs at all, or an optimal one
You need to re-read my original post a few times to answer your question.
Quote:
You need to re-read my original post a few times to answer your question.
I'm not sure it's all that helpful in a dd context. My understanding of "optimal I/O" outside this dd recommendation context has to do largely with maintaining sector alignment on 512e drives and with filesystem structure management, not necessarily with throughput for multi-GB or TB full-disk dd transfers, for which partitions and volumes have no relevance. I'm sure I've variously read 32K, 65536, or maybe even 128K recommended as bs in dd (cloning) operations to speed the overall process up. The same would apply to full-disk wiping.
Quote:
I'm not sure it's all that helpful in a dd context. My understanding of "optimal I/O" outside this dd recommendation context has to do largely with maintaining sector alignment on 512e drives and with filesystem structure management, not necessarily with throughput for multi-GB or TB full-disk dd transfers, for which partitions and volumes have no relevance. I'm sure I've variously read 32K, 65536, or maybe even 128K recommended as bs in dd (cloning) operations to speed the overall process up. The same would apply to full-disk wiping.
This is how I interpret the OP's question:
I've been using dd to wipe drives, but am under the impression it's inadequate compared to other wipe utilities, is there a better utility to wipe a drive?
My answer is no. They all do the same thing, and most utilities default to block/cluster size. The default block size for pretty much every file system utility is 4K; this is not absolute, since you can define whatever block size you want when creating a file system. As such, dd can achieve the exact same results as any other wipe utility by setting the byte size to the block size; dd itself defaults to sector size, which is a little more thorough but time-consuming. That is why I mentioned how to determine the block size.
In other words, to answer the OP's question: no other utility will do a better job, just as dd won't do a better job, but dd can do whatever type of wipe you want, be it a non-secure, semi-secure, or secure wipe, depending on the variables passed and the number of passes, just like any GUI wipe utility.
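A hedged sketch of those three wipe styles, demonstrated on a scratch file (TARGET is a placeholder; point it at the real device, e.g. /dev/sdX, and drop count= for an actual wipe; the pass count is illustrative, not a standard):

Code:
```shell
# Placeholder target; substitute the real device for an actual wipe.
TARGET=wipe.img

# Non-secure: a single zero-fill pass.
dd if=/dev/zero of="$TARGET" bs=4096 count=256 status=none

# Semi-secure: a single random-data pass.
dd if=/dev/urandom of="$TARGET" bs=4096 count=256 status=none

# More secure: several random passes, then a final zero pass.
for pass in 1 2 3; do
    dd if=/dev/urandom of="$TARGET" bs=4096 count=256 status=none
done
dd if=/dev/zero of="$TARGET" bs=4096 count=256 status=none
```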
I was not under the impression dd is inadequate. No impression either way. I wanted to confirm what I heard about a different way. I am aware that people will try to blow smoke up my pants and that new, bright and shiny is not always better. Good reply.
The point is that sometimes Mint has trouble installing to a drive that has something else on it. In one case a drive had LVM on it. I had to wipe the drive to get rid of LVM. I did not want to learn the ins and outs of LVM just so I could remove it; I had no need for LVM.
Another option is partitioning in advance of starting any installer, with you deciding what goes where and how much space each partition is allowed to use. Any installer that doesn't accommodate this, I abort. My partitioning is always done in advance. Mint never forced me to abort, dutifully accepting my assignments just like other Debian derivatives.
Whatever partitioner is used in advance can delete whatever partitioning already exists, without any need for prior wiping. To be sure, formatting the newly created partitions prior to installation is also possible, and I do it too, including the swap partition if necessary.