Linux - Newbie: This Linux forum is for members that are new to Linux. Just starting out and have a question? If it is not in the man pages or the how-to's, this is the place!
Quote:
If I output with a bs of 4M, how will the disk get it back to 512? [...] How do I get it back to 512 after running 'dd' with bs=4M?
No worries. You don't have to do anything. The block-size parameter 'bs' only affects the size of the data block passed in memory to/from the hard drive. It does not actually change the hard drive's physical sector size, or the sector size as it might be defined/formatted in the filesystem. The reason we are talking about 'bs' is that the data transfer is supposed to be more efficient (there is less overhead in setting up the transfer) if you pass fewer large blocks rather than a large number of small blocks.
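If you want to see that bs only changes how the data is chunked in transit, not what ends up on the target, here is a quick check (the file names are just placeholders):
Code:
# Copy the same source data with two very different block sizes;
# the results are byte-for-byte identical.
dd if=/dev/urandom of=ref.bin bs=1M count=1
dd if=ref.bin of=small-bs.bin bs=512
dd if=ref.bin of=large-bs.bin bs=4M
cmp small-bs.bin large-bs.bin && echo identical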
If you are curious, you can experiment. I wrote this little script:
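(The script wasn't quoted here, but judging from the output it was doing something like this, with /mnt/usb standing in for the actual mount point:)
Code:
#!/bin/sh
# Write the same 262 MB to the USB drive, varying only the block size.
# /mnt/usb is a placeholder for wherever the drive is mounted.
dd if=/dev/zero of=/mnt/usb/test.bin bs=512  count=512000
dd if=/dev/zero of=/mnt/usb/test.bin bs=4K   count=64000
dd if=/dev/zero of=/mnt/usb/test.bin bs=64K  count=4000
dd if=/dev/zero of=/mnt/usb/test.bin bs=256K count=1000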
This is not exactly the same as you are doing, but it's close. I am writing files to a FAT filesystem on a USB hard drive. Here is my output:
Code:
512000+0 records in
512000+0 records out
262144000 bytes (262 MB) copied, 7.66652 s, 34.2 MB/s
64000+0 records in
64000+0 records out
262144000 bytes (262 MB) copied, 9.29928 s, 28.2 MB/s
4000+0 records in
4000+0 records out
262144000 bytes (262 MB) copied, 9.27168 s, 28.3 MB/s
1000+0 records in
1000+0 records out
262144000 bytes (262 MB) copied, 9.04747 s, 29.0 MB/s
Wait a second... It's faster with 512-byte blocks. Go figure!
If I repeat the exercise with if=/dev/urandom, larger blocks go significantly (about 2x) faster than small blocks, but of course not as fast as /dev/zero.
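For instance (same placeholder mount point as above):
Code:
# /dev/urandom is CPU-bound, so the per-call overhead of tiny
# blocks is much more visible than with /dev/zero.
dd if=/dev/urandom of=/mnt/usb/test.bin bs=512  count=512000
dd if=/dev/urandom of=/mnt/usb/test.bin bs=256K count=1000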
Quote:
* Erase the partition table with 'sudo dd if=/dev/urandom of=/dev/sda count=1'
* Create random size partitions with random file systems
* Delete all partitions
* Erase the partition table again and then create the partitions I needed (sketched below).
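For reference, those steps could be scripted roughly like this; sdX and the partition layout are placeholders, and this is only a sketch of the procedure described above:
Code:
# DESTRUCTIVE: replace sdX with the real device (check with lsblk) first.
sudo dd if=/dev/urandom of=/dev/sdX count=1      # clobber the MBR
sudo parted -s /dev/sdX mklabel msdos            # fresh, empty table
sudo parted -s /dev/sdX mkpart primary 1MiB 40%  # arbitrary-size partitions
sudo parted -s /dev/sdX mkpart primary 40% 100%
sudo mkfs.ext4 /dev/sdX1                         # arbitrary filesystems
sudo mkfs.vfat /dev/sdX2
sudo parted -s /dev/sdX rm 2                     # delete all partitions
sudo parted -s /dev/sdX rm 1
sudo dd if=/dev/urandom of=/dev/sdX count=1      # erase the table again
# ...then create the partitions actually needed.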
If you want to erase the drive, you don't have to do all that with random partitions and random file systems. All the user information, and that includes the partition table, filesystems, directories, and files, is laid out in /dev/sdX (where 'X' is the hard drive index, for example 'a' in /dev/sda). If you simply overwrite the entire space of /dev/sdX, you have done it all.
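In practice that's a single command (again, sdX is a placeholder; double-check the device name with lsblk before running it):
Code:
# DESTRUCTIVE: overwrites every sector of the whole drive with zeros.
sudo dd if=/dev/zero of=/dev/sdX bs=4M
sync   # flush any remaining buffers to the disk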
But that would take too long. All I'm doing in the steps above is making the data inaccessible to the system, because even after erasing the MBR my data was still accessible. Creating random-size partitions, deleting them all, erasing the partition table, and then creating the partitions I need makes it impossible for any OS to see what was there.
Would there be a problem using Seagate's DiscWizard to do this zero-fill operation?
After a quick look at the Seagate DiscWizard User Guide, I would say it looks good. It has some useful options (multipass overwriting, overwrite with random numbers or defined bit patterns), and it has some safeguards against accidental erasure of the wrong drive. Try it and see how it works.
Quote:
But that would take too long. All I'm doing in the steps above is making the data inaccessible to the system, because even after erasing the MBR my data was still accessible. Creating random-size partitions, deleting them all, erasing the partition table, and then creating the partitions I need makes it impossible for any OS to see what was there.
Repartitioning makes it harder, but even in the absence of a valid partition table or directory, many types of files (I'm thinking of photo, video, and music) can be reconstructed by recognizing standard file headers and internal data structures. If the file is contiguous (not fragmented), this should be easy. If it's fragmented, it may still be possible, though I don't imagine it would be easy or certain.
If my money or reputation depended on files being unrecoverable, I would wipe the entire drive space. For my 2TB drive on USB 2.0, at a realistic 25-30 MB/s, that works out to about 20 hours per pass. I would set aside a weekend, or a week, and just do it.
Edit: Sorry, I didn't notice the block size question was already answered.
The dd block size is logical. It doesn't have any bearing on the physical sector size on the disk. It's just the number of bytes that dd writes in each i/o operation.
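You can confirm that by reading the drive's sector sizes back after a dd run; they never change (blockdev is part of util-linux):
Code:
# dd's bs= has no effect on either of these values.
sudo blockdev --getss /dev/sda    # logical sector size, in bytes
sudo blockdev --getpbsz /dev/sda  # physical sector size, in bytes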
Last edited by Z038; 04-26-2013 at 02:45 PM.
Reason: already answered
Quote:
After a quick look at the Seagate DiscWizard User Guide, I would say it looks good. It has some useful options (multipass overwriting, overwrite with random numbers or defined bit patterns), and it has some safeguards against accidental erasure of the wrong drive. Try it and see how it works.
Tried. Result: "At least one Seagate Drive must be installed". LOL
Quote:
Repartitioning makes it harder, but even in the absence of a valid partition table or directory, many types of files (I'm thinking of photo, video, and music) can be reconstructed by recognizing standard file headers and internal data structures. If the file is contiguous (not fragmented), this should be easy. If it's fragmented, it may still be possible, though I don't imagine it would be easy or certain.
If my money or reputation depended on files being unrecoverable, I would wipe the entire drive space. For my 2TB drive on USB 2.0, at a realistic 25-30 MB/s, that works out to about 20 hours per pass. I would set aside a weekend, or a week, and just do it.
I was talking about the files being recoverable automatically by the system =)
One day I had conflicting files on an Ubuntu install. I wiped out the partition table, but after formatting and reinstalling the system those files were still there causing me problems. That's why I partition at random sizes: it guarantees the system will not recognize those files in the future.
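Side note: a lighter-weight way to keep an installer from recognizing the old filesystems, without the random-partition dance, is to clear the on-disk signatures directly with wipefs from util-linux (sdX is a placeholder):
Code:
# Removes partition-table and filesystem signatures only;
# the file data itself stays on disk until overwritten.
sudo wipefs -a /dev/sdX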
It may not fit your requirement of being fast. I suspect it'll run for a couple of days on a 1TB drive.
Filling my old 320GB drive with zeros took 1 hour; filling with ones and then zeros took 2 hours. I SUSPECT this drive will take about 9 hours. Correct me if I'm wrong.
Code:
ubuntu@ubuntu:~$ sudo dd if=/dev/zero of=/dev/sda bs=16M
dd: writing `/dev/sda': No space left on device
59617+0 records in
59616+0 records out
1000204886016 bytes (1.0 TB) copied, 12274.5 s, 81.5 MB/s
Those of you who did already, did it go OK? Why the no space left message? I suppose it went OK since the copied size was 1.0TB.
Yes, it went OK. Apparently dd doesn't bother to check the disk size (or file size, if you choose to output to a file); it just keeps rolling until it runs out of space. When the target is a whole disk, that "No space left on device" message is exactly what you want to see: it means every last sector was written.
For output to a pre-existing file, you need to use the count= parameter, and set count*bs equal to the file size. If you don't use count=X, it will keep going, and the file will grow until the disk (or filesystem) is full.
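For example, to overwrite exactly the first 100 MiB of an existing file without growing or truncating it (the file name is a placeholder):
Code:
# 100 records of 1 MiB each = exactly 104857600 bytes written.
# conv=notrunc keeps dd from truncating the rest of the file.
dd if=/dev/zero of=existing.img bs=1M count=100 conv=notrunc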