"dd if=/dev/urandom" must just repeat a small block over & over
I used this command to wipe a small 8GB partition recently, then imaged the partition, then compressed it. The compressed file size was about 8MB.
I'm trying to figure out a way to overwrite drives with random data, and I thought creating an image and repeating it over and over might be faster than reading from /dev/urandom each time.
I have A LOT of drives to wipe, so any suggestions on the best way to do this would be appreciated.
Reading /dev/urandom could be much faster than reading a filesystem on a HDD.
No. /dev/urandom is much slower than the write throughput of a disk (even before taking filesystem performance into account). The kernel's pseudo-random number generator is not designed to spew out large amounts of data fast. And in this case the output was crappy as hell - otherwise a 1000:1 compression ratio would not have been possible.
OP: Why do you want to wipe the drive with "junk"? Why not just use zeros with /dev/zero as the source? That is as fast as your drive can write.
If you want to wipe with something that looks like real junk, set up an encrypted mapping with a throwaway key and write zeros to the plaintext side.
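A sketch of that approach, assuming a plain dm-crypt mapping via cryptsetup (requires root; /dev/sdX and the mapping name "wipe" are placeholders):

```shell
# Set up a plain dm-crypt mapping keyed with throwaway random data.
# /dev/sdX and the name "wipe" are placeholders - adjust to your drive.
cryptsetup open --type plain -d /dev/urandom /dev/sdX wipe
# Zeros written to the mapping land on /dev/sdX as ciphertext,
# which is indistinguishable from random data, at close to raw disk speed.
dd if=/dev/zero of=/dev/mapper/wipe bs=8M
# Tear down the mapping; the key is discarded, so the data is unrecoverable.
cryptsetup close wipe
```

The zeros are encrypted on the way to the disk, so you get /dev/zero-like throughput but random-looking output.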
The reason for creating a random "junk file" was to store it on a RAM drive when wiping drives, instead of reading /dev/urandom directly. IDK if I would use the whole file or just take a section of it. IDK if random data is better than all zeros; some say it is more secure in some cases.
The process is going to handle many drives at once. The system can serve 20 HDDs, but IDK if it could actually handle that many drives being zeroed at once. I figured the system would be really bogged down doing 20 urandom generations, but reading from a RAM drive would probably be much faster.
You saw 1000:1 compression? I think your /dev/urandom is broken! (What distro is that?) Maybe you could inspect it with hexdump to see if there is an obvious problem, like all zeroes.
I tried gzipping /dev/urandom at my end. With dd, I made a 1GB file, and gzip cannot compress it; the gz file is slightly larger (by about 170KB) than the original file. (I used gzip with the default settings, no user options.)
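The same check can be reproduced at a smaller scale (the file paths here are just examples):

```shell
# Take an 8 MiB sample from /dev/urandom and try to compress it.
dd if=/dev/urandom of=/tmp/sample.bin bs=1M count=8 2>/dev/null
gzip -kf /tmp/sample.bin            # -k keeps the original file
ls -l /tmp/sample.bin /tmp/sample.bin.gz
# Healthy random data will not shrink - the .gz ends up slightly larger.
# A quick eyeball check for obvious patterns (e.g. all zeroes):
hexdump -C /tmp/sample.bin | head
```

If the .gz file comes out dramatically smaller than the original, the sample was not random.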
In any case, /dev/urandom is rather slow for wiping drives, but here is an easy trick to accelerate it. If it is acceptable to repeat a large block of pseudo-random numbers, you can write a block of /dev/urandom to the hard drive and then duplicate it like so:
Code:
# Example using 8MB block, with hard drive at /dev/sdx:
# First we write one block:
dd if=/dev/urandom of=/dev/sdx bs=8M count=1
# Then let dd propagate that block over the rest of the drive.
# It reads one block behind where it writes, so the first block is
# copied to the end of the device; dd stops with an expected
# "No space left on device" error when it reaches the end:
dd if=/dev/sdx of=/dev/sdx bs=8M seek=1
You don't need a RAM drive, because the operating system will normally keep the 8MB (or even somewhat larger) block in its buffer cache. I find that this method runs as fast as clearing the drive with if=/dev/zero. It doesn't get much faster than that.
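The self-copy trick is easy to verify on a plain file instead of a disk (the paths below are just for demonstration; conv=notrunc is needed for regular files, though not for block devices):

```shell
# Demonstrate the dd self-copy trick on a regular file instead of a disk.
dd if=/dev/zero of=/tmp/target.img bs=8M count=4 2>/dev/null       # 32 MiB target
dd if=/dev/urandom of=/tmp/target.img bs=8M count=1 conv=notrunc 2>/dev/null
# Self-copy: dd reads one block behind where it writes, so block 0
# is propagated across blocks 1-3. count=3 stops it at the old EOF
# (on a block device you would omit count and let dd hit the end).
dd if=/tmp/target.img of=/tmp/target.img bs=8M seek=1 count=3 conv=notrunc 2>/dev/null
# Blocks 0 and 3 should now be identical:
dd if=/tmp/target.img of=/tmp/blk0 bs=8M count=1 2>/dev/null
dd if=/tmp/target.img of=/tmp/blk3 bs=8M skip=3 count=1 2>/dev/null
cmp /tmp/blk0 /tmp/blk3 && echo "blocks match"
```

Note that on a regular file you must bound the copy with count, otherwise the writes keep extending the file and dd never reaches end of input.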
Ubuntu 14.10. I compressed it again and got the same 8MB file. I'll see if I can open the .img file in an editor and look at the result. I did a cat on it, and it does repeat a lot, but I don't think I'm seeing the whole picture...
Oh yeah, you compressed an image. It might help if you tell us what commands you used to wipe the drive, create the image, and compress the image, and how large the image file was. I wonder if the wiped partition was correctly filled with random data but the imaging method reduced it, for example by ignoring unused space.
Sorry, I'm not a Linux expert yet, only been really using it for 1 year.
To wipe the drive (an 8GB SSD partition), I unmounted it and ran:
Code:
dd if=/dev/urandom of=/dev/sdXXX bs=1M
I then used gparted to select all the space on the drive (sda4) and created an ext4 partition, because dd wouldn't recognize the area without doing this. I then created an image as follows:
I then opened Dolphin, selected the file, and compressed it with each of the three available formats; they all came out to 7.9MB and took about 20-30 seconds to process.
I'm not really sure about the part of your post that I bolded. The 8GB partition was wiped with random data, and then a dd bit-for-bit copy was made, giving me an 8GB .img file. Now, if urandom somehow produced something like a million repetitions of the same 800KB random block, that may be what was going on.
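One quick way to test the repeated-block theory is to cmp chunks of the image taken from different offsets. Here is a self-contained sketch that builds a deliberately repetitive demo image first (all file names are just examples, not the actual sda4 image):

```shell
# Build a demo "image" that is one 1 MiB random block repeated 16 times.
dd if=/dev/urandom of=/tmp/block.bin bs=1M count=1 2>/dev/null
for i in $(seq 16); do cat /tmp/block.bin; done > /tmp/demo.img
# If an image is a repeated block, chunks from different offsets match:
dd if=/tmp/demo.img of=/tmp/chunk0 bs=1M count=1 2>/dev/null
dd if=/tmp/demo.img of=/tmp/chunk8 bs=1M count=1 skip=8 2>/dev/null
cmp /tmp/chunk0 /tmp/chunk8 && echo "repeated block detected"
```

Run against a genuinely random image, cmp would report a difference within the first few bytes.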
Quote:
Originally Posted by cilbuper
Sorry, I'm not a Linux expert yet, only been really using it for 1 year.
Okay, no worries.
Quote:
Originally Posted by cilbuper
To wipe the drive (an 8GB SSD partition), I unmounted it and ran:
Code:
dd if=/dev/urandom of=/dev/sdXXX bs=1M
Do you mean of=/dev/sda4? I expect that would wipe it.
Quote:
Originally Posted by cilbuper
I then used gparted to select all the space on the drive (sda4) and created an ext4 partition, because dd wouldn't recognize the area without doing this. I then created an image as follows:
If you execute that dd command as superuser (also known as root), the partition should be recognized by dd regardless of the existence of a valid file system or whether the partition is formatted or mounted.
Anyway, I suspect that gparted overwrote the random data when it created the file system. It may have an option for testing or clearing the partition prior to creating the basic framework of the file system. (A "quick format," on the other hand, would only lay down the framework but leave the unused data blocks alone.)
Quote:
Originally Posted by cilbuper
I'm not really sure about what you wrote that I bolded.
Now that I have a better understanding of your method, I think it was gparted that cleared the random data, not the imaging method which was dd. If gparted zeroed the blocks, then dd would simply package it, and the compression routine would squeeze out most of the zeroes.
See if you can omit the gparted step. Then I expect the image will have all the randomness that /dev/urandom put there, it will be incompressible, and you can use the uncompressed image to wipe other drives or partitions, if that's what you require.
With that many drives to wipe, just use /dev/zero. The only recovery methods that could reverse this are simply not practical. Unless you have mission-critical data that needs to be wiped, just zero the drives. Not to mention that repeating a small random block makes the old data no harder to recover than zeroing does.
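Zeroing many drives at once can be done with a simple shell loop that backgrounds one dd per target. A sketch (real use would list block devices such as /dev/sdb, /dev/sdc, ...; plain files stand in here so it can run without root):

```shell
# Zero several targets in parallel, one background dd per target.
# Substitute real block devices (e.g. /dev/sdb) for the demo files.
targets="/tmp/wipe1.img /tmp/wipe2.img /tmp/wipe3.img"
for t in $targets; do
    dd if=/dev/zero of="$t" bs=8M count=2 2>/dev/null &
done
wait    # block until every background dd has finished
ls -l $targets
```

With real drives on separate controllers/ports, the wipes proceed mostly independently; the bottleneck is each drive's write speed, not the CPU.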