Old 10-15-2008, 07:41 AM   #1
robel
Member
 
Registered: Oct 2008
Location: Norway
Distribution: Slackware
Posts: 77

Rep: Reputation: 19
Gathering entropy


I do not have a true Random Number Generator (RNG), but I don't think I need one. Could someone please confirm whether this scenario is safe?

I have tons of JPG and CR2 (camera RAW) images taken with my Canon D400 over the last few years. I estimate the number of images to be 10,000 or more.

This kind of data could, besides being nice pictures, double as an entropy feed, right?

Let's say I do the following:
  • Pick a random image file
  • Strip the header and footer
  • Encrypt it with gpg
  • Encrypt it once again with my own home-made encrypter
  • Run it as a feed to rngd

I could even compress it with bzip2.

Or perhaps even better: Divide it into chunks of 1024 bytes and run each chunk through sha, where the sha-sum is the entropy. There's no shortage of data; I have over 100 gigabytes (saving every CR2 and JPG). And if that's not enough I have probably the same amount of movies on my hard drives. I just figured my own pictures would be safer than a DVD for making entropy. :-)
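A minimal sketch of that chunk-and-hash step (the file names stripped.dat, chunk_ and pool.bin are only placeholders, and the rngd options may differ between rng-tools versions):

Code:
# split the stripped image data into 1024-byte chunks, hash each chunk,
# and collect the raw digests as the candidate entropy
split -b 1024 stripped.dat chunk_
for f in chunk_*; do
    sha1sum "$f" | cut -d' ' -f1 | xxd -r -p
done > pool.bin

# feed the pool to rngd in the foreground, reading from the file
# instead of a hardware RNG device
rngd -f -r pool.bin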

Now, if you ask me, this is perfect entropy and nobody would ever be able to calculate my random data.

Am I correct?
 
Old 10-15-2008, 04:49 PM   #2
robel
Member
 
Registered: Oct 2008
Location: Norway
Distribution: Slackware
Posts: 77

Original Poster
Rep: Reputation: 19
OK, answering my own post. :-)

In order to recreate my entropy, a hacker (or the NSA?) needs my original picture, my private gpg key, and knowledge of the algorithm in use.

So, let's say I sneak out with my camera and fill the memory card with pictures. Then I make my entropy as described and burn the memory card afterwards. Of course, I will use a brand new gpg key, stored on the very same card to ensure it gets destroyed. My "home brewed" permutation algorithm will also get destroyed with the memory card.

Now, if this isn't safe I'll eat my hat. ;-)
 
Old 10-15-2008, 04:59 PM   #3
jailbait
LQ Guru
 
Registered: Feb 2003
Location: Virginia, USA
Distribution: Debian 12
Posts: 8,337

Rep: Reputation: 548
In the past I have created random numbers on computers using the same concept that you are using. That type of process works far better than pseudo random tables. I would suggest that you also add Linux's urandom to your entropy collection. See:

man urandom

---------------------
Steve Stites
 
Old 10-15-2008, 05:20 PM   #4
robel
Member
 
Registered: Oct 2008
Location: Norway
Distribution: Slackware
Posts: 77

Original Poster
Rep: Reputation: 19
Quote:
Originally Posted by jailbait View Post
In the past I have created random numbers on computers using the same concept that you are using. That type of process works far better than pseudo random tables. I would suggest that you also add Linux's urandom to your entropy collection. See:

man urandom

---------------------
Steve Stites
In fact, that's what I do! For some reason I do not trust my random numbers, so I feed them to rngd (part of gkernel) while reading /dev/urandom (go figure!).

Now, that may be very stupid, and perhaps feeding random numbers through rngd will give me the very same random numbers in /dev/urandom (if I don't drain the entropy).

Anyway, I hope Linux will give me that extra randomness by using some (but not all) of the random data fed through rngd.

Oh, and I'm using rngd because it will filter out data that does not pass the FIPS 140-2 test. Sounds really cool.

Last edited by robel; 10-15-2008 at 05:24 PM.
 
Old 10-16-2008, 02:41 AM   #5
robel
Member
 
Registered: Oct 2008
Location: Norway
Distribution: Slackware
Posts: 77

Original Poster
Rep: Reputation: 19
Being a network administrator at my local ISP, I have access to our main router. Of course it is a Linux router, and that gives me access to delicious entropy. There are over 2000 users, and traffic peaks at 200 Mbps during prime time. So I figured it's a good idea to tap into eth0 on that router and collect entropy.

Code:
tcpdump -n -s 1514 -i eth0 -c 1000000 -w test.dat greater 1500
There is a small amount of good entropy in the IP and TCP headers as well, but to avoid biasing my data I just collect big packets. Anyway, I will compress it, encrypt it (gpg and my own algorithm), split it into chunks, and run each through sha.
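A rough sketch of that whole pipeline. The symmetric gpg passphrase in key.txt is only a stand-in for the poster's own key setup, and the file names (test.bz2, chunk_, pool.bin) are placeholders:

Code:
# capture only large packets, then compress, encrypt, chunk and hash-whiten them
tcpdump -n -s 1514 -i eth0 -c 1000000 -w test.dat greater 1500
bzip2 -c test.dat > test.bz2
gpg --batch --symmetric --passphrase-file key.txt -o test.gpg test.bz2
split -b 1024 test.gpg chunk_
for f in chunk_*; do sha1sum "$f" | cut -d' ' -f1 | xxd -r -p; done > pool.bin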
 
Old 10-16-2008, 09:45 AM   #6
slimm609
Member
 
Registered: May 2007
Location: Chas, SC
Distribution: slackware, gentoo, fedora, LFS, sidewinder G2, solaris, FreeBSD, RHEL, SUSE, Backtrack
Posts: 430

Rep: Reputation: 67
Hook up a microphone, just have it listen all the time, and feed that into the pool. It will get room noise (aka silence), talking, music, etc. Or put the mic out the window in a plastic cup for the echo of nature.
 
Old 10-16-2008, 09:49 AM   #7
robel
Member
 
Registered: Oct 2008
Location: Norway
Distribution: Slackware
Posts: 77

Original Poster
Rep: Reputation: 19
Quote:
Originally Posted by slimm609 View Post
Hook up a microphone, just have it listen all the time, and feed that into the pool. It will get room noise (aka silence), talking, music, etc. Or put the mic out the window in a plastic cup for the echo of nature.
Yes, that's an option. You should take action to unbias it, though; sha/md5 should do the trick.
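For example, something along these lines (a sketch; the arecord parameters and the file names noise.raw and pool.bin are just placeholders):

Code:
# record ten seconds of raw audio from the default capture device,
# then hash-whiten it and append the digest to the entropy pool
arecord -t raw -f S16_LE -r 44100 -d 10 noise.raw
sha512sum noise.raw | cut -d' ' -f1 | xxd -r -p >> pool.bin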

Another nice entropy source is the noise of an untuned FM receiver.
 
Old 10-16-2008, 10:33 AM   #8
slimm609
Member
 
Registered: May 2007
Location: Chas, SC
Distribution: slackware, gentoo, fedora, LFS, sidewinder G2, solaris, FreeBSD, RHEL, SUSE, Backtrack
Posts: 430

Rep: Reputation: 67
Quote:
Originally Posted by robel View Post
Yes, that's an option. You should take action to unbias it, though; sha/md5 should do the trick.

Another nice entropy source is the noise of an untuned FM receiver.
I think two tuned FM receivers might be better because they are constantly changing what's on. With an untuned receiver you will get some similarities in the noise.
 
Old 10-16-2008, 10:53 AM   #9
robel
Member
 
Registered: Oct 2008
Location: Norway
Distribution: Slackware
Posts: 77

Original Poster
Rep: Reputation: 19
Quote:
Originally Posted by slimm609 View Post
I think two tuned FM receivers might be better because they are constantly changing what's on. With an untuned receiver you will get some similarities in the noise.
You do? I thought an untuned FM receiver picked up nothing but white noise, which is extremely unpredictable. White noise contains, by definition, all frequencies within a fixed bandwidth. Hence it would be a perfect RNG.

I think I'll stick to white noise rather than a tuned FM station. If the NSA knows which station I'm tuned to, they can, in theory, duplicate my entropy. Very theoretical, of course.
 
Old 10-16-2008, 02:08 PM   #10
slimm609
Member
 
Registered: May 2007
Location: Chas, SC
Distribution: slackware, gentoo, fedora, LFS, sidewinder G2, solaris, FreeBSD, RHEL, SUSE, Backtrack
Posts: 430

Rep: Reputation: 67
Quote:
Originally Posted by robel View Post
You do? I thought an untuned FM receiver picked up nothing but white noise, which is extremely unpredictable. White noise contains, by definition, all frequencies within a fixed bandwidth. Hence it would be a perfect RNG.

I think I'll stick to white noise rather than a tuned FM station. If the NSA knows which station I'm tuned to, they can, in theory, duplicate my entropy. Very theoretical, of course.
True. Maybe one tuned and one untuned, feeding into a single audio input, would be a better solution.

White noise is very unpredictable, but it also doesn't vary as much as a tuned station would. White noise is mostly in a certain frequency range.
 
Old 10-16-2008, 03:32 PM   #11
jschiwal
LQ Guru
 
Registered: Aug 2001
Location: Fargo, ND
Distribution: SuSE AMD64
Posts: 15,733

Rep: Reputation: 682
One of the characteristics you want is an even distribution over possible values over time, no matter how you look at the data. For example, taking a 48-bit number, if you use the first byte as the x coordinate, the second as y, and the third as z, the data should be evenly distributed on a 3D plot.

If you come up with your own source, you should use a testing suite that determines the randomness quality of your samples. Any regularity could cause problems with encryption. Statistical applications depend on even distribution.
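For a quick statistical sanity check (which, as discussed later in this thread, is not a proof of randomness), the rngtest and ent tools can be pointed at a sample file, assuming they are installed; sample.bin is a placeholder name:

Code:
# FIPS 140-2 style block tests from rng-tools (reads 20000-bit blocks from stdin)
cat sample.bin | rngtest

# byte-frequency, chi-square and compression estimates from ent(1)
ent sample.bin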
 
Old 10-16-2008, 04:14 PM   #12
cronologic
LQ Newbie
 
Registered: Dec 2007
Posts: 2

Rep: Reputation: 0
how to determine entropy?

To obtain new encryption keys, I have in the past downloaded some raw binary data from the internet (e.g. hotbits.org) and from a site that claims the entropy is quantum in nature (radioactive decay).

Anyhow, I wanted to measure the entropy in these files I've been building, as well as in regular files on my system. Many crackers can find keys on your system by simply measuring the entropy in files or portions of files, so I wanted to perform a similar experiment with my own files.

I've had a difficult time finding entropy algorithms on the internet, but I managed to put together a PHP script that measures the Shannon entropy (also known as "information entropy") of a file (before I integrate this into one of my C++ all-purpose number tools). While this script serves its intended purpose, the Shannon algorithm, from what I've read, is too generic to give a good measure of the entropy of a binary file.

Basically the algorithm is this:
- loop through all bytes, building a table of the count of each occurring byte value (0..255)
- loop through the table, ignoring any non-occurring byte values, and process each byte's count as follows:
      probability = occurrence_count / size_of_data
      entropy += -1 * (probability * log2(probability))
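The same computation can be done from the shell as a quick check (a sketch using GNU od and awk; file.bin is a placeholder, and the result is reported in bits of entropy per byte, so 8.0 is the maximum):

Code:
od -An -v -tu1 file.bin | tr -s ' ' '\n' | grep -v '^$' | sort -n | uniq -c |
awk '{ total += $1; count[NR] = $1 }
     END { for (i in count) { p = count[i] / total; h -= p * log(p) / log(2) }
           printf "%.4f bits of entropy per byte\n", h }'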

I've gathered from what I've read that there is a better algorithm for determining the entropy of raw binary data; I just can't seem to find any information on it. Does anyone have any pointers?

FYI: If anyone is interested, I can post this .PHP script.
 
Old 10-17-2008, 03:57 AM   #13
ledow
Member
 
Registered: Apr 2005
Location: UK
Distribution: Slackware 13.0
Posts: 241

Rep: Reputation: 34
As a mathematician, I'm dubious about some people's understanding of entropy and randomness here. There is a reason that the RNG in Linux is designed and tweaked only by people with a real understanding of mathematics and randomness. I don't pretend to be an expert in it but there are a lot of assertions being thrown about that mean absolutely nothing.

The Shannon algorithm, for example, doesn't "tell you how much entropy you have" but instead tells you how much data is extractable from a known-random entropy source. Just reading through the code for it should tell you that. Feed garbage in and you get garbage out - in this case, if your data isn't truly random, the Shannon entropy doesn't tell you "how much entropy" is in that data. The same applies to the person who said "Oh, and I'm using rngd because it will filter out data that does not pass the FIPS 140-2 test.". FIPS 140-2 is not something that a software program can certify you for, which is why there are labs that do the certification. True FIPS 140-2 certification involves determining the physical source and entropy of every input and every possible way of affecting that input (temperature of the computer system, etc.).

There's no magic bullet that can say "your data is truly random". No analysis of random data can do that - only analysis of the methods used to generate it. You can do a bit of statistical analysis to check the data you get isn't too stupid, but no software can say "this is random". I could "fake" statistically-perfect random numbers by picking numbers out of my head, then jumbling them and playing with some padding numbers to enhance their statistical properties. It doesn't mean that the numbers I was thinking of were random.

Also, the problem with feeding lots of JPG (or any other filetype) data into a random seed is that JPGs have a strict structure (even without the header/footer) which affects the "randomness" of the data. This structure is determined by *possible values* for each particular byte (which, I believe, in JPEG is still present in the data because later data are sometimes dependent on the values of earlier data). Any limit on the possible values is a theoretical weakness in an entropy source. Hashing based on those limited values is only of small benefit, and any decent RNG will probably not use a random source directly anyway, unless it deliberately assumes the user is smart enough to ensure the data is statistically "even" as well as random.

You might use a photo of some white noise (which, as pointed out, is a term that usually refers to truly random data) but it will still be constrained to the format that JPEG specifies. This means that there will be predictable elements within that data. Inserting predictable elements into random data is what all random and pseudo-random number generators avoid like the plague - this is why the "non-random" parts are stripped out BEFORE being used as a seed. Even running them through a hashing algorithm doesn't help here, and you are introducing a weakness.

Now, before everyone shouts, this weakness is theoretical but present. The military would be up in arms about it being in their random/psuedo-random number generators but most people wouldn't care. But for somebody who takes the time to introduce entropy, a lot of you have fallen into classic mistakes that the people who provided /dev/random for you were very careful to avoid.

It's quite possible that the vast amounts of entropy being gathered from truly random portions of the files far outweigh the predictable elements, but without real mathematical analysis, you can't be sure. This is how PRNGs are broken - finding a tiny, yet real, predictable element (not necessarily "bit X is 1", but even "there is a 0.001% probability that every 100th entropy bit was tending more towards a 1 than a 0") and analysing the hell out of it in the hope that it causes a cascade that reduces the *real* randomness, thus providing a crack into which to insert a supercomputer crowbar.

Now, gathering quotes from this thread, consider the "non-entropy" (predictability) introduced by the following methods:

"Pick a random image file" (image file being the critical term here)
"I could even compress it with bzip2."
"And if that's not enough I have probably the same amount of movies on my hard drives."

Now, on a practical scale, we have quotes like this:

"Now, if you ask me, this is perfect entropy and nobody would ever be able to calculate my random data."

It isn't perfect entropy. In fact, it's quite poor entropy backed only by the fact that the sheer scale of the data produces sufficient entropy *compared to the predictability it introduces*. It's *probably* true (in fact, highly probable) "that nobody would ever be able to calculate my random data." However, it's NOT certain. More likely, it's because nobody will ever have to because your data just isn't that important to those people who would have the capacity to do the sort of analysis/computation necessary.

It's like inventing your own language - you could easily invent a language that nobody ever learns to understand (there was a case with two girls who spoke their own private language from birth that nobody else, including a lot of linguistic analysts, could understand). But there is no way that you could say that nobody COULD have learned to understand it.

"In order to recreate my entropy, a hacker (or NSA?) need my original picture, my private gpg key, and know the algorithm in use." "If NSA knows which station I'm tuned on they can, in theory, duplicate my entropy."

Why do they need to recreate your entropy? That's a rubbish way of attacking an encrypted data source. What they do is find *weaknesses* in the entropy used without even knowing what it was. This is why it's VERY important to strip all predictable elements from an entropy source (in fact, it's not really an entropy source unless you do that - it's just a data source). This is why great care is taken to not just throw, say, the last time you clicked a mouse or the last time a computer interrupt fired into an entropy pool but, instead, to extract *just those bits* that are on a scale which makes it unpredictable (e.g. the nanosecond the event occurred). Introducing two bits of "true" entropy (the ns value) rather than several thousand of "semi-predictable" entropy (the number of nanoseconds since the epoch, which provides several dozen bits of "predictable" data, plus a few bits of "unpredictable" data) is superior here.
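As an illustration of that "keep only the unpredictable bits" point (a sketch, not how the kernel actually does it): the high-order digits of a nanosecond timestamp are easy to guess, so only the low-order bits are worth anything as entropy.

Code:
# a full nanosecond timestamp: most of it is predictable
date +%s%N

# keep only the lowest 8 bits, which jitter unpredictably from run to run
printf '%02x\n' $(( $(date +%s%N) & 0xff ))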

I'm not even going to entertain questions of "My "home brewed" permutation algorithm" because the chances that it does anything to enhance entropy are zero. Just because some code does "something weird" to some data doesn't mean it does ANYTHING to enhance the entropy of that data. If you think this "protects" your random entropy or your encrypted data, you're almost certainly wrong. Analysis of encrypted files doesn't care WHAT you've done to them, it's what patterns are left afterwards. Grabbing some properly encrypted file and jumbling the bytes about does nothing to protect the file from being cracked and doesn't affect any of the statistical measures and analysis used to break them.

In fact, I would posit that anything "homebrew" that keeps the size of the data the same does nothing to increase the entropy (I believe that may be impossible) and will likely greatly decrease it, to the point where it would introduce a weakness in the final encrypted data.

"Now, if this isn't safe I'll eat my hat. ;-)"

Safe in practical terms? Possibly. Safe in theory? No. Safer than just using a true random source? Nowhere near it. Safe enough that if you were a terrorist and a large enough government had a real interest in decrypting it, they couldn't? Almost certainly not. Hats are easier to eat if you start with the rim, by the way.

The same problems occur again and again in this thread:

"So I figured it's a good idea to tap into eth0 on that router and collect entropy. tcpdump -n -s 1514 -i eth0 -c 1000000 -w test.dat greater 1500 There is a small amount of good entropy in the IP- and TCP-headers as well, but to avoid my data to be biased I just collect big packets."

And again the entropy is collected incorrectly and deterministic elements are not removed. TCP headers, for instance, are virtually deterministic, apart from possibly "source port" (dubious), initial SEQ number (dubious again - even with a properly designed PRNG determining such things) and timestamps (better, but only if you use the highest-resolution bits, which I believe would already be used by Linux as part of its standard entropy pool - if they are not, it's for a reason).

According to what I just googled, there are over 160 bits of data in a typical TCP header (some of them are dependent on optional components etc.). We *might* get one or two bits of truly good entropy out of a single packet if we're lucky (and if we consider it in isolation to other packets that arrive, which is an assertion which would require a lot of analysis to determine if it's correct). So if we just blindly use it as an entropy source, we've introduced, say, 155 bits of highly predictable data (note that I mean by structure, sequence and protocol, not necessarily that we know the IP's and sequence numbers directly) and got maybe 5 bits of true entropy along with it. Multiply that up by a couple of gigs over the course of a day - you've just compromised your RNG.

Now, there have been some good suggestions, too.

"Hook up a microphone and just have it listen all the time and feed that into the pool. it will get room noise(aka silence), talking, music, etc. Or put the mic out the window in a plastic cup for the echo of nature."

A microphone, treated properly by something like the available audio entropy daemon, is a good source of good (not perfect) entropy. It will respond to quite a lot of truly (or highly) random events such as movement of air (subject to Brownian motion, local air currents etc.). Tuning it to a channel of "static" is good too, although there's always the possibility that there is a pattern in that frequency range that the human ear cannot detect (ever listened to the "sound" of encrypted data, for example? It sounds exactly like static). There's also crossover with actual radio stations, even if they are inaudible, as well as harmonics of other frequencies (ever wondered why you can sometimes get "ghost" channels of existing broadcasts on your old analogue TV at frequencies way outside the normal range for your country? It's not being picked up from elsewhere, it's hitting a harmonic of the channel, or picking up frequency-shifted echoes from buildings/land/air/sea etc.)

But this method is extremely likely to introduce much more entropy than predictability.

"I think 2 tuned FM-receivers might be better because they are constantly changing whats on. With a untuned reciever you will get some similarities in the noise."

With a tuned channel, however, you are probably going to introduce more predictability than randomness. Human speech, music, silence: they all have a pattern. Even "layered" over, or compared against, one another, two tuned channels would not be *as good* an entropy source, though the exact circumstances of such things are best answered by radio engineers, physicists and mathematicians who have studied it.

Out of all the methods mentioned, tuning a TV or radio to a static channel and passing it through the video/audio entropy daemons is one source that I would personally consider if I thought that the Linux RNG wasn't good enough for me. And that's in second place only to a proper nuclear-decay based RNG with a good design. Shortly behind that is the suggestion of using properly-generated quantum numbers off a website (and only because it was off a website - if I were working in that lab, that would be number one, obviously).

Throwing what you *think* is random data into an entropy pool is a really bad idea. Chances are, however, that for the majority of circumstances it won't matter anyway. But if you think you're "enhancing" your random streams by this, then you're almost certainly wrong. Check out the LKML (or other mailing lists for BSD etc.) for discussions on entropy - a lot of crazy ideas are thrown out for the above reasons.

On Linux, cat /dev/random can soon block on any machine (because it "knows" it's run out of true entropy) and takes a long while to regenerate any significant amount. This is because out of the billions (if not trillions) of bits flying through the computer every second (from the CPU's RNG, the timing of internal and external peripherals, network data, random fluctuations in internal bus timing, human-derived input etc.) only a tiny handful are actually useful as entropy.
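You can watch this happen on a stock kernel (a sketch; the paths are the standard ones on Linux):

Code:
# the kernel's current estimate of available entropy, in bits
cat /proc/sys/kernel/random/entropy_avail

# reading from /dev/random stalls once that estimate is exhausted,
# while /dev/urandom keeps going
dd if=/dev/random of=/dev/null bs=64 count=4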

You're not alone in this; it's an extremely difficult area to understand, and human beings are completely pathetic at estimating, or even describing, randomness and probability, which doesn't help.

Moving off-topic slightly, and to quote my own post from a year or so ago:

Next time you do the lottery (a significantly random event), try to estimate the chances of two consecutive numbers coming up. Then see how many draws there have been where two consecutive numbers appear. Or look up something like the Monty Hall Problem. Those two examples should teach you just how bad you or any human is at judging (and therefore understanding) randomness or simple probability.
 
Old 10-17-2008, 05:37 AM   #14
win32sux
LQ Guru
 
Registered: Jul 2003
Location: Los Angeles
Distribution: Ubuntu
Posts: 9,870

Rep: Reputation: 380
Quote:
Originally Posted by ledow View Post
Next time you do the lottery (a significantly random event), try to estimate the chances of two consecutive numbers coming up. Then see how many draws there have been where two consecutive numbers appear.
If I properly remember the relevant chapter from my statistics class, I would say that the probabilities of that are exactly the same as the probabilities of another number coming up, because they are independent events. Perhaps this question is trickier than it seems, though, like the Monty Hall Problem you mention below.
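A quick Monte Carlo sketch (assuming a 6-from-49 lottery and GNU shuf) suggests the chance of at least one consecutive pair in a single draw is surprisingly large, which is probably ledow's point:

Code:
hits=0; trials=2000
for i in $(seq $trials); do
    shuf -i 1-49 -n 6 | sort -n |
    awk 'NR > 1 && $1 == prev + 1 { found = 1 } { prev = $1 } END { exit !found }' &&
    hits=$((hits + 1))
done
echo "$hits of $trials draws contained two consecutive numbers"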

Quote:
Or look up something like the Monty Hall Problem. Those two examples should teach you just how bad you or any human is at judging (and therefore understanding) randomness or simple probability.
I remember this problem really clearly. I also remember how I almost went nuts trying to wrap my brain around it.

BTW, thank you very much for taking the time to contribute such a thorough educational post. Great stuff! =)

Last edited by win32sux; 10-17-2008 at 05:39 AM.
 
Old 10-17-2008, 07:00 AM   #15
robel
Member
 
Registered: Oct 2008
Location: Norway
Distribution: Slackware
Posts: 77

Original Poster
Rep: Reputation: 19
That was a long post, so I'm replying to only parts of it this time.

Quote:
Originally Posted by ledow View Post
The same applies to the person who said "Oh, and I'm using rngd because it will filter out data that does not pass the FIPS 140-2 test.".
That was me.

I may not be a mathematician, but I can read man pages:

Quote:
Originally Posted by man rngtest
DESCRIPTION
rngtest works on blocks of 20000 bits at a time, using the FIPS 140-2 (errata of 2001-10-10) tests to verify the randomness of the block of data.
Quote:
Originally Posted by man rngd
DESCRIPTION
This daemon feeds data from a random number generator to the kernel's random number entropy pool, after first checking the data to ensure that it is properly random.
Maybe I was unclear when referring to FIPS 140-2, but my point was what's quoted in the man pages. If these man pages are BS, please don't blame me. When I read "...after first checking the data to ensure that it is properly random", I take it to be a good thing.

Quote:
Originally Posted by ledow View Post
FIPS 140-2 is not something that a software program can certify you for, hence why there are labs that do the certification. True FIPS 140-2 certification involves determining the physical source and entropy of every input and every possible way of affecting that input (temperature of the computer system, etc.).
If I used the wrong phrase, I'm sorry. But somewhere in FIPS 140-2 there must be some randomness verification, or else the author of rngd is way off.

Quote:
Originally Posted by ledow View Post
There's no magic bullet that can say "your data is truly random". No analysis of random data can do that - only analysis of the methods used to generate it.
That means rngd and rngtest are just junk, hm? They're part of gkernel, by the way. Let's hope Jeff Garzik does not read this. Or did I miss the big picture somewhere?

Quote:
Originally Posted by ledow View Post
It isn't perfect entropy. In fact, it's quite poor entropy backed only by the fact that the sheer scale of the data produces sufficient entropy *compared to the predictability it introduces*.
You avoided commenting on the use of gpg. Doesn't gpg scramble the predictability? With a key pair totally unknown to the community, that is.

Quote:
Originally Posted by ledow View Post
Why do they need to recreate your entropy? That's a rubbish way of attacking an encrypted data source.
Again, I'm sorry for being unclear. My whole point in gathering random data was to make a symmetric cipher with a key the same length as the payload. I failed to mention that in my previous post.

The idea is to collect 4 gigabytes of random data, burn it to a DVD, bring it personally to a friend, and use it as a one-time key for a symmetric cipher.

You can say a whole lot in 4GB.

Is it possible for someone to decrypt this kind of encrypted data without possessing the DVD? Of course it is, by luck:

1. Guess the key is exactly the same length as the message
2. Guess the key

Any other options?

Quote:
Originally Posted by ledow View Post
I'm not even going to entertain questions of "My "home brewed" permutation algorithm" because the chances that it does anything to enhance entropy are zero.
But as a key-maker it would be great, right?

I realize I should have pointed out that I intended to use the random data as a huge encryption key. I guess I mixed "good entropy" with "strong key". What I was searching for was a way to make a strong key. My first thought was to use random data.

Quote:
Originally Posted by ledow View Post
TCP headers, for instance, are virtually deterministic
Let's say I calculate a sha hash for every IP packet with length = 1500 bytes. Can you tell me why the hash is bad entropy? No wait... let me rephrase my question.

If I gave you two big chunks of random data (say one gigabyte each), one made by a hardware RNG utilizing the laws of quantum mechanics and one made by hashing IP frames, could you (or anyone at all) tell the difference? I mean, is it possible for someone to sort out which chunk is from a true RNG and which is not?

From the way you write, I assume the answer is "it is not impossible". But as a mathematician, can you estimate how probable it is? Of course, you have a 50% chance of getting it right just by guessing (that one is produced by a true RNG!).

Quote:
Originally Posted by ledow View Post
With a tuned channel, however, you are probably going to introduce more predictability than randomness.
Finally we agree!

Last edited by robel; 10-17-2008 at 07:02 AM. Reason: Typo
 
  

