Old 09-07-2015, 02:01 AM   #1
cilbuper
Member
 
Registered: Mar 2008
Posts: 141

Rep: Reputation: 0
"dd if=/dev/urandom" must just repeat a small block over & over


I used this command to wipe a small 8GB partition recently, then imaged the partition, then compressed it. The compressed file size was about 8MB.

I'm trying to figure out a way to overwrite drives with random data, and I thought creating an image and writing it over & over might be faster than reading /dev/urandom every time.

I have a LOT of drives to wipe, so any suggestions on the best way to do this would be appreciated.
 
Old 09-07-2015, 02:13 AM   #2
pan64
LQ Addict
 
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 21,838

Rep: Reputation: 7308
Why do you think that? Reading /dev/urandom could be much faster than reading a filesystem on an HDD.
 
Old 09-07-2015, 04:59 AM   #3
cepheus11
Member
 
Registered: Nov 2010
Location: Germany
Distribution: Gentoo
Posts: 286

Rep: Reputation: 91
Quote:
Originally Posted by pan64 View Post
Reading /dev/urandom could be much faster than reading a filesystem on an HDD.
No. /dev/urandom is much slower than the throughput of a disk (without even taking filesystem performance into account). The pseudo-random number generator is not designed for spewing out large amounts of data fast. And it is crappy as hell in this case - otherwise a compression factor of 1/1000 would not have been possible.
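
For reference, the speed difference is easy to measure with a quick benchmark that writes nothing to disk (1GB read from the PRNG and discarded):
Code:
# measure raw /dev/urandom throughput:
dd if=/dev/urandom of=/dev/null bs=1M count=1024
On kernels of this era that typically reports a few tens of MB/s, well below the sequential write speed of an ordinary disk.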

OP: Why do you want to wipe the drive with "junk"? Why not just use zeros, with /dev/zero as the source? That is as fast as your drive can write.
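
A plain zero wipe is just (same <partition> placeholder as below):
Code:
dd if=/dev/zero of=/dev/<partition> bs=1M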

If you want to wipe with something that looks like real junk, use an encrypted mapping with a throwaway key and write zeros to the plaintext side:

Code:
# create a throwaway encrypted mapping keyed from /dev/urandom
# (cryptsetup create takes the mapping name first, then the device):
cryptsetup -c aes-xts-plain64 -s 256 -h ripemd160 -d /dev/urandom create wipe /dev/<partition>
# zeros written to the plaintext side land on disk as random-looking ciphertext:
dd if=/dev/zero bs=1M of=/dev/mapper/wipe
cryptsetup remove wipe
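
On newer cryptsetup versions (1.6 and later, if I remember right) the same mapping can also be made with the open/close syntax:
Code:
cryptsetup open --type plain -c aes-xts-plain64 -s 256 -d /dev/urandom /dev/<partition> wipe
dd if=/dev/zero bs=1M of=/dev/mapper/wipe
cryptsetup close wipe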
 
Old 09-07-2015, 08:07 AM   #4
cilbuper
Member
 
Registered: Mar 2008
Posts: 141

Original Poster
Rep: Reputation: 0
The reason for creating a random "junk file" was to store it on a RAM drive and read from that when wiping drives, instead of reading /dev/urandom. I don't know if I would use the whole file or just take a section of it. I also don't know if random data is better than all zeros; some say it is more secure in some cases.

The process is going to handle many drives at once. The system can hold 20 hard drives, but I don't know if it could actually cope with that many being wiped at the same time. I figured the system would be really bogged down running 20 urandom reads in parallel, whereas reading from a RAM drive would probably be much faster.
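
Something like this is what I have in mind for the parallel part (drive names here are just placeholders):
Code:
# wipe several drives at once, one background dd per drive
for d in sdb sdc sdd; do
    dd if=/dev/zero of=/dev/$d bs=1M &
done
wait    # returns when every background dd has finished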
 
Old 09-07-2015, 10:25 PM   #5
Beryllos
Member
 
Registered: Apr 2013
Location: Massachusetts
Distribution: Debian
Posts: 529

Rep: Reputation: 319
You saw 1000:1 compression? I think your /dev/urandom is broken! (What distro is that?) Maybe you could inspect it with hexdump to see if there is an obvious problem, like all zeroes.

I tried gzipping /dev/urandom at my end. With dd, I made a 1GB file, and gzip cannot compress it; the gz file is slightly larger (by about 170KB) than the original file. (I used gzip with the default settings, no user options.)
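
Roughly what I ran (from memory, the exact flags may differ):
Code:
dd if=/dev/urandom of=test.bin bs=1M count=1024
gzip -k test.bin
ls -l test.bin test.bin.gz   # the .gz comes out slightly larger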

In any case, /dev/urandom is rather slow for wiping drives, but here is an easy trick to accelerate it. If it is acceptable to repeat a large block of pseudo-random numbers, you can write a block of /dev/urandom to the hard drive and then duplicate it like so:
Code:
# Example using 8MB block, with hard drive at /dev/sdx:

# First we write one block:
dd if=/dev/urandom of=/dev/sdx bs=8M count=1

# Then copy the device onto itself, offset by one block; dd keeps
# re-reading the data it just wrote, so the first block is repeated
# to the end of the available space:
dd if=/dev/sdx of=/dev/sdx bs=8M seek=1
You don't need a RAM drive because the operating system will normally keep the 8MB (or even somewhat larger) block in a RAM buffer. I find that this method runs as fast as clearing the drive with if=/dev/zero. It doesn't get much faster than that.
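
If you want to spot-check that the block really repeated, compare two adjacent 8MB blocks (bash process substitution; just a sanity check, not part of the wipe):
Code:
# blocks 0 and 1 should be identical after the copy:
cmp <(dd if=/dev/sdx bs=8M count=1 skip=0 2>/dev/null) \
    <(dd if=/dev/sdx bs=8M count=1 skip=1 2>/dev/null) && echo "blocks match"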
 
1 member found this post helpful.
Old 09-08-2015, 03:58 AM   #6
cilbuper
Member
 
Registered: Mar 2008
Posts: 141

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by Beryllos View Post
You saw 1000:1 compression? I think your /dev/urandom is broken! (What distro is that?) Maybe you could inspect it with hexdump to see if there is an obvious problem, like all zeroes.
Ubuntu 14.10. I compressed it again and got the same 8MB file. I'll see if I can open the .img file in an editor and see what the result is. I did a cat on it, and the output does repeat a lot, but I don't think I'm seeing the whole picture...
 
Old 09-08-2015, 01:05 PM   #7
Beryllos
Member
 
Registered: Apr 2013
Location: Massachusetts
Distribution: Debian
Posts: 529

Rep: Reputation: 319
Oh yeah, you compressed an image. It might help if you tell us what commands you used to wipe the drive, create the image, and compress the image, and how large the image file was. I wonder if the wiped partition was correctly filled with random data but the imaging method reduced it, for example by ignoring unused space.
 
Old 09-12-2015, 05:24 AM   #8
cilbuper
Member
 
Registered: Mar 2008
Posts: 141

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by Beryllos View Post
Oh yeah, you compressed an image. It might help if you tell us what commands you used to wipe the drive, create the image, and compress the image, and how large the image file was. I wonder if the wiped partition was correctly filled with random data but the imaging method reduced it, for example by ignoring unused space.
Sorry, I'm not a Linux expert yet; I've only been really using it for a year.

To wipe the drive (8gb SSD drive partition), I unmounted it and ran:
Code:
dd if=/dev/urandom of=/dev/sdXXX bs=1M
I then used gparted to select all the space on the drive (sda4) and created an ext4 partition, because dd wouldn't recognize the area without doing this. I then created an image with the following:
Code:
dd if=/dev/sda4 of=/home/user/backup_images/8gb_urandom_wipe.img
The resulting .img file was 8.0GB.

I then opened Dolphin, selected the file, and compressed it with each of the three available formats; they all came out to 7.9MB and took about 20-30 seconds to process.

I'm not really sure about what you wrote that I bolded. The 8GB partition was wiped with random data, and then a dd bit-for-bit copy was made, giving me an 8GB .img file. Now, if urandom somehow produced a lot of repetition, like the same small random block written over and over, that may be what was going on.
 
Old 09-12-2015, 11:54 AM   #9
Beryllos
Member
 
Registered: Apr 2013
Location: Massachusetts
Distribution: Debian
Posts: 529

Rep: Reputation: 319
Quote:
Originally Posted by cilbuper View Post
Sorry, I'm not a Linux expert yet; I've only been really using it for a year.
Okay, no worries.

Quote:
Originally Posted by cilbuper View Post
To wipe the drive (8gb SSD drive partition), I unmounted it and ran:
Code:
dd if=/dev/urandom of=/dev/sdXXX bs=1M
Do you mean of=/dev/sda4? I expect that would wipe it.

Quote:
Originally Posted by cilbuper View Post
I then used gparted to select all the space on the drive (sda4) and created an ext4 partition, because dd wouldn't recognize the area without doing this. I then created an image with the following:
Code:
dd if=/dev/sda4 of=/home/user/backup_images/8gb_urandom_wipe.img
If you execute that dd command as superuser (also known as root), the partition should be recognized by dd regardless of the existence of a valid file system or whether the partition is formatted or mounted.

Anyway, I suspect that gparted overwrote the random data when it created the file system. It may have an option for testing or clearing the partition prior to creating the basic framework of the file system. (A "quick format," on the other hand, would only lay down the framework but leave the unused data blocks alone.)

Quote:
Originally Posted by cilbuper View Post
I'm not really sure about what you wrote that I bolded.
Now that I have a better understanding of your method, I think it was gparted that cleared the random data, not the imaging method which was dd. If gparted zeroed the blocks, then dd would simply package it, and the compression routine would squeeze out most of the zeroes.

See if you can omit the gparted step. Then I expect the image will have all the randomness that /dev/urandom put there; it will be incompressible, and you can use the uncompressed image to wipe other drives or partitions, if that's what you require.
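
Writing the image back out would then look like this (target device is a placeholder):
Code:
dd if=/home/user/backup_images/8gb_urandom_wipe.img of=/dev/<target> bs=1M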

Last edited by Beryllos; 09-12-2015 at 11:56 AM.
 
Old 09-15-2015, 02:23 PM   #10
metaschima
Senior Member
 
Registered: Dec 2013
Distribution: Slackware
Posts: 1,982

Rep: Reputation: 492
With that many drives to wipe, just use /dev/zero. The only recovery methods that could conceivably reverse this are simply not practical. Unless you have mission-critical data that needs to be destroyed, just zero the drives. Besides, a wipe that just repeats a small block is no harder to recover data from than a zero wipe.
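
For example, coreutils shred can do a single zero pass per drive (device name is a placeholder):
Code:
# -n 0 skips shred's random passes, -z adds one final zero pass
shred -v -n 0 -z /dev/sdX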
 
  

