Linux - Newbie: This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's this is the place!
I just realized that it's much faster to zero-fill a drive after it's been formatted. I don't know the reason; maybe file system enhancements or alignment ...
So I used gparted (0.7) to format the whole drive ext4 and create a big file like this:
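The command itself didn't make it into the post, so here is a hedged sketch of a typical zero-fill of a mounted filesystem (all paths are placeholders; the demo below is bounded so it finishes quickly):

```shell
# Hypothetical sketch, demonstrated in /tmp with a small bound.
# For a real wipe, point "of=" at a file on the drive's mountpoint
# and drop "count=" so dd runs until the filesystem is full.
dd if=/dev/zero of=/tmp/zerofile bs=1M count=16 2>/dev/null
sync                 # flush the zeros to disk before trusting the wipe
ls -l /tmp/zerofile  # confirm the file reached the expected size
rm /tmp/zerofile     # free the space again
```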
Still: Why is /dev/urandom only 5 MB/s and /dev/random only about 10 B/s!?
The random number generator harvests "entropy" (randomness) from environmental sources like the keyboard and mouse, and maintains an "entropy pool." (I suspect the randomness is in the timing of keyboard, mouse, and perhaps other activity, not the content.) The entropy pool can be used as a source of random numbers, but because of the limited data rate of the keyboard/mouse/environment sources, the entropy pool can be emptied rather quickly.
/dev/random will provide random numbers only as long as there is sufficient entropy in the pool. When the pool is depleted, /dev/random will pause until more entropy is gathered from the environment.
On the other hand, /dev/urandom uses a pseudo-random number generator algorithm to provide any amount of output at a high rate limited mainly by your CPU speed.
Try "man 4 random" (or "man urandom") to see the official documentation; both devices are covered on the same page.
P.S. I get 90-100 MB/s for dd transfer from /dev/zero, 4.5 MB/s from /dev/urandom using the default block size of 512 bytes, and 8.1 MB/s from /dev/urandom with bs=64k or bs=1M.
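Those figures are easy to reproduce. A quick sketch that measures the generators themselves, with the disk taken out of the picture by writing to /dev/null (counts chosen just to finish quickly):

```shell
# Compare /dev/zero vs /dev/urandom throughput. Writing to /dev/null
# removes the disk from the measurement; dd prints the rate on stderr.
dd if=/dev/zero    of=/dev/null bs=1M count=256     # baseline
dd if=/dev/urandom of=/dev/null bs=512 count=65536  # small blocks: more syscall overhead
dd if=/dev/urandom of=/dev/null bs=1M  count=32     # large blocks: usually faster
```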
Last edited by Beryllos; 05-10-2013 at 01:21 AM.
Reason: added my typical speeds for comparison
Alright, seems it drops again to about 100 MB/s at the end – at least on a 2TB drive.
I'm still looking for a way to create several random files in a loop. Something like:
while true; do sudo dd if=/dev/urandom of=/media/DRIVE/file.$RANDOM bs=4k count=$RANDOM; done
This creates files of random size, up to about 128 MB each (since $RANDOM tops out at 32767 and the blocks are 4 KiB), but the loop can only be aborted with Ctrl+Z.
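A bounded variant of that loop (a sketch with placeholder paths and sizes) stops on its own after a fixed number of files, so a plain Ctrl+C ends it cleanly:

```shell
# Hypothetical rewrite: make a fixed number of random-sized files.
# $RANDOM is 0-32767, so at bs=4k each file could be up to ~128 MB;
# the "% 256 + 1" keeps this demo small and the count nonzero.
dest=/tmp/randfiles            # stand-in for /media/DRIVE
mkdir -p "$dest"
for i in $(seq 1 5); do
    dd if=/dev/urandom of="$dest/file.$i" bs=4k count=$((RANDOM % 256 + 1)) 2>/dev/null
done
ls "$dest"
```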
Quote: "Still: Why is /dev/urandom only 5 MB/s and /dev/random only about 10 B/s!?"
/dev/random and /dev/urandom need randomness (entropy). The system only stores enough randomness to create a certain amount of random data.
/dev/random is high-quality entropy. It will block after the entropy pool is exhausted. /dev/urandom will deliver random data that exactly repeats itself if you ask for more than the entropy pool can supply in one shot.
To get an idea, try:
dd if=/dev/random | hexdump -C
and
dd if=/dev/urandom | hexdump -C
and watch how the data output behaves.
But for your application, /dev/urandom is fine, with no added security risk. If you ever encrypt a partition, fill it with random data before you do anything else.
Otherwise the encrypted data is vulnerable to crypto attacks that exploit the encryption algorithm itself.
If you do it the proper way, it's impossible to tell what algorithm was used to make the volume.
Oh, and lest I forget, beware of disk wipe methods that save time. Try this if you will:
Take a drive and wipe it with the manufacturer's zero-out or low-level format program. Time the process with your own clock, not the time the program displays.
Then, boot with a live CD and search for leftover data:
dd if=/dev/sda | strings -n 8 | less
and check for data. Then wipe the drive with the first utility again, and compare the total time to the first wipe. Why the difference?
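The experiment can be rehearsed safely on an image file before touching real hardware; the image path below is a stand-in for the actual disk:

```shell
# Fill an image with random data (a "dirty" disk), zero it, then scan
# for surviving strings. After the zero pass, strings finds nothing,
# because an all-zero file contains no runs of printable characters.
img=/tmp/disk.img
dd if=/dev/urandom of="$img" bs=1M count=32 2>/dev/null   # dirty the image
time dd if=/dev/zero of="$img" bs=1M count=32 conv=notrunc 2>/dev/null
dd if="$img" 2>/dev/null | strings -n 8 | head            # expect no output
rm "$img"
```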
Last edited by AwesomeMachine; 05-10-2013 at 04:42 AM.
Beryllos, I think this was more for me. But I can see the difference...
Okay, I should have mentioned that I put in the "bs" and "count" parameters, and I was examining whether "/dev/urandom will deliver random data that exactly repeats itself if you ask for more than the entropy pool can supply in one shot." I did not see any repetition.
Here's an example of what I did:
dd if=/dev/urandom bs=256M count=1 | hexdump -v -e '1/8 "%16.16x"' -e '"\n"' | sort | uniq -dc | sort -n | wc -l
1+0 records in
1+0 records out
268435456 bytes (268 MB) copied, 200.523 s, 1.3 MB/s
This gets a 256MB block of data from /dev/urandom, divides it into 8-byte words, and looks for duplicates. The 0 (zero) at the end indicates that there were no duplicates within the 32M consecutive 8-byte words which were sampled. This is not the most thorough test one can envision, but it tells us at least that /dev/urandom is not very repetitive.
I would like to explore the cryptographic strength of /dev/urandom someday. Probably the best way to do that is to look at the source code and see whether the programmers selected an established, proven, cryptographic pseudo-random number generator. Cryptographers know how to do it right.
I tried that, and do not see anything suspicious. What do you see?
It's more the speed and rhythm that should be observed. /dev/urandom does not repeat often enough to see by eye; if it repeats at all, it might be something on the order of every 1024 bytes. But if the entire drive is overwritten, it doesn't matter.
Overwriting with random data is really for encrypted volumes or files. If you want to wipe a drive you can use zeroes.
If you wipe a drive twice, the second pass takes much longer. That tells you the first pass doesn't wipe everything.
/dev/random is for use in long-term keys that can't easily be changed, and that someone might have years of time to crack.
If I were to recommend the best way to secure against brute-force attacks, it would be to hash the salted passphrase for 30 seconds.
Every passphrase attempt would then require not 0.0001 seconds but 30.0 seconds, raising the cost per guess by a factor of 300,000, nearly six orders of magnitude.
If it normally took 20 days to crack a passphrase, hashing for thirty seconds would effectively increase that to about 6.0 million days.
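That idea is known as key stretching. A minimal shell sketch using an iterated hash (passphrase, salt, and iteration count are placeholders; real systems use a vetted construction such as PBKDF2):

```shell
# Iterate a hash so every passphrase guess costs a fixed amount of
# CPU work; more iterations = proportionally slower brute force.
pass="example passphrase"   # placeholder passphrase
salt="5f4dcc3b"             # placeholder salt
h="$pass$salt"
for i in $(seq 1 1000); do  # tune the count to hit your time budget
    h=$(printf '%s' "$h" | sha256sum | cut -d' ' -f1)
done
echo "$h"                   # the stretched key: 64 hex characters
```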
But the logical extrapolation of that method is to make the data unusable, which is 100% secure.
The federal government uses idle cpu cycles on 110,000 clustered PCs to crack all but the most meticulously crafted encryption.
Seems fast at first glance ...
You need cryptsetup and the dm-crypt kernel module (modprobe dm-crypt) for that.
The best way to write random data to a drive quickly would be to create a LUKS container, then write zeros to the container. When you do this it writes random-looking data to the drive, as the zeros are translated down to the real drive from the container. It seems to work quicker than writing directly from urandom, and it's a pretty good way to be sure the data is scrambled.
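A sketch of that sequence, assuming cryptsetup's classic command names (/dev/sdX and the mapping name are placeholders; the guard keeps the sketch from running against a nonexistent device, and for real it destroys everything on the disk):

```shell
# Sketch of the LUKS trick: zeros written to the mapped device come
# out as ciphertext on the underlying disk. DO NOT run this against
# a disk you care about; /dev/sdX and "wipeme" are placeholders.
disk=/dev/sdX                                # the disk to wipe
if [ -b "$disk" ]; then                      # only proceed on a real block device
    sudo modprobe dm-crypt
    sudo cryptsetup luksFormat "$disk"       # use a throwaway passphrase
    sudo cryptsetup luksOpen "$disk" wipeme
    sudo dd if=/dev/zero of=/dev/mapper/wipeme bs=1M
    sudo cryptsetup luksClose wipeme
fi
```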
Also, on the note of security: if you have an SSD, writing zeros to the drive sometimes will not clear it of data. The drive's firmware may see that no real data is being sent and cache it in a way that preserves the old data; it does this for wear leveling. With flash media and other non-spinning disks, the best practice is to physically destroy the device.
Testing at the Center for Magnetic Storage Systems found that Solid-State drives cannot be securely wiped by any known means. They are disposable items. Exvor is correct that it is due to the wear-leveling-management controller in the SSD drive electronics. The same holds true for all solid-state flash-type storage devices. Thankfully, practically no one sells used USB drives.
We're not talking tiny bits of data left on the drives, like one sector here and there. We're talking hundreds of megabytes that will survive even repeated wipes by the most sophisticated drive sterilization software. This means that SSD storage cannot be used for any type of forensic duplication, so the entire computer-forensics industry is still very dependent on mechanical drives, if forensic analysis is to have any legitimacy at all.
It also means that no means exist to securely wipe SSDs for any purpose, not the least of which would be to avoid possible political persecution at the hands of tyrannical regimes.
Most people probably hot-link to this discussion thread off a Google.com hit, so you don't see that the OP has reached over a million views. Sorry, but I am not able to update the OP, because the edit button disappears after a certain time duration. But it's pretty good the way it is. I have no plans to fork it. It wouldn't be the same if it was anywhere but linuxquestions.org. I'm pleased if the OP has been of use to others. That's what was intended. I hope many more people will benefit. Knowledge is power! Truth will set you free. I give the power to the people.
But I take a lot of heat for it! There are at least a few people who are not too happy with me for putting together the OP, simply because it puts the power in the hands of the common folk. My attitude is: if someone is talented, that person will try to be of some benefit to others, and he won't worry about it. It's those who lack motivation and talent who must keep knowledge hidden, because if everyone knows, they'll do for themselves what others would charge money to do for them.
I learned a great lesson from writing the OP: if you ignore obstacles they become stepping stones toward new heights.