Paper: Surreptitiously Weakening Cryptographic Systems by Bruce Schneier et al.
A recent paper by Bruce Schneier, Matthew Fredrikson, Tadayoshi Kohno, and Thomas Ristenpart: http://eprint.iacr.org/2015/097
It is recommended reading for those who are concerned with cryptographic sabotage by gov agencies and other entities. The layman should be able to understand most of it, especially the important parts. It is basically a summary and categorization of recent cryptographic security compromises.
I read it and have the following conclusions and questions.
Conclusions:
1) Being FLOSS does NOT greatly increase the detectability of code sabotage. Major security flaws in FLOSS (Heartbleed, the Debian PRNG bug, GnuTLS, etc.) remained undetected for years before they were found and fixed.
2) Backdoors with high control (Lotus Notes, Dual EC DRBG) as used by the NSA are not generally exploitable by the public, and therefore may be more acceptable, assuming that they are implemented properly and that obtaining the key to the backdoor is infeasible for adversaries.
Does this mean you should be using closed-source software and/or let the NSA backdoor your programs? Absolutely not...
3) Kleptography and subliminal channels also have high control, can be used by the NSA, and would only be detectable in a FLOSS program.
The paper also recommends open-source code so that it can be inspected.
Questions:
So why shouldn't the NSA backdoor encryption?
The high control of backdoors relies on the adversary being unable to obtain the key to the backdoor, and on proper implementation. If the adversary is, say, China and/or Russia, could spies not steal the key and thereby totally compromise nearly every security program using the backdoor?
Given enough computing power, could they not simply focus their efforts on deriving the key, thus totally compromising nearly every security program using the backdoor?
Is the benefit/cost ratio of such a broad backdoor high enough to make it preferable to a more targeted, precise approach (spying on select targets instead of everyone everywhere)? A more targeted approach would compromise the security of only a small number of individuals (the targets).
Do you trust the NSA with all this information about everyone? Are they morally infallible, and will they refrain from using the data for other purposes? Do they even need all this information about everyone, does it help them accomplish some goal, and precisely what is that goal?
I think until all these questions are thoroughly answered, the NSA should not be allowed to backdoor encryption or otherwise compromise it.
IMHO the only true way to secure information is by physical means. One can encipher information as much as possible, but if an ill-intentioned party gets hold of it in whatever form, they can and will do everything they can to decode and view it.
There are issues, though, related to the use of information: the users and their use case. Users can be governmental, commercial, or personal/private. Use cases vary widely and can affect how time-sensitive the information is. For instance, in a battle, "Your air support is coming in now from the East" is highly urgent to both the requesting party and, obviously, the opposition. Meanwhile, a multi-million dollar R&D effort is similarly important to the developer and owner of the information, but also potentially very important to a competitor; the commercial case simply differs from the battlefield one.
That's where this all becomes one of several problems, because use and deployment of protected information is required, much like having money is not the point of an economy but rather the act of using money makes the economy operate.
I have no proposed solutions myself for these issues.
Makes sense to me that "as a rule or law" the NSA should not be given these allowances until some better guidance has been determined.
This all makes me wonder a bit, though. Yes, I'm smart, but I'm not a certified genius. I have used and understand the principles of encryption systems, lived through and dealt with the fallout from the Walker incident, and participated in the design of central telephony switches that required law-enforcement electronic surveillance capabilities. Even so, with the explosion of encryption methods over the years, I'd have to take time to read up on them. And merely reading up on them would give me only an overview; furthermore, there is no guarantee that the source is available.
So where does that lead me if I choose to use encryption? I download something and use it, right? How would I know if the author put in a trapdoor for their own use, let alone due to the NSA?
Backdoors are indeed difficult to detect, but in the case of Dual EC DRBG, it was first flagged in 2006: http://web.archive.org/web/201406210...tymatters_1115
Yet it was only confirmed and deprecated recently, almost 10 years later.
IMO, stay away from anything the NSA touches; they can't help themselves given the opportunity. This is one reason why I don't trust AES or SHA-1/SHA-2. I know cryptographers recommend (even in this paper) only using algorithms with nothing-up-the-sleeve numbers, i.e. constants selected in a demonstrable manner. Constants that appear to be selected at random may be a sign of sabotage. In some cases you can substitute your own pseudorandom constants, even though this may weaken the crypto to some extent, because constants are supposed to be chosen so that the cipher/hash has maximum diffusion. For example, in the Whirlpool hash (based on AES) the initial constants were pseudorandom; they were later changed to strengthen the cipher and make it easier to implement in hardware (not necessarily a good thing).
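To illustrate the "demonstrable manner" idea: one common approach is to derive constants deterministically from a public seed string, so anyone can re-derive them and verify nothing was hidden. This is only a minimal sketch of the technique; the label string and word width here are made up for illustration, not taken from any real cipher.

```python
import hashlib

def nums_constants(label: str, count: int, word_bytes: int = 4) -> list[int]:
    """Derive 'nothing-up-the-sleeve' constants by hashing a public,
    verifiable seed string in counter mode. Anyone can rerun this and
    check that the constants match."""
    out = []
    counter = 0
    while len(out) < count:
        digest = hashlib.sha256(f"{label}:{counter}".encode()).digest()
        # Split the 32-byte digest into fixed-width words.
        for i in range(0, len(digest), word_bytes):
            if len(out) == count:
                break
            out.append(int.from_bytes(digest[i:i + word_bytes], "big"))
        counter += 1
    return out

# Hypothetical label; in practice it would name the cipher and constant set.
consts = nums_constants("example-cipher-round-constants", 8)
```

Because the derivation is fully specified and repeatable, a reviewer does not have to trust the designer's claim that the constants are "random".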
For hardware implementation, take a look at the case of DES. It is possible to brute force DES not only because of the short key size (56 bits), but also because of fast hardware implementations. https://en.wikipedia.org/wiki/Data_E...e_force_attack
Thus, if a cipher/hash can be implemented efficiently in hardware, a brute-force attack on it is easier than on a cipher/hash of the same key size that cannot be implemented efficiently in hardware.
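The DES numbers are easy to check with back-of-the-envelope arithmetic. The search rate below is an assumption for illustration, of the same order as the 1998 EFF "Deep Crack" machine (tens of billions of keys per second):

```python
# Rough brute-force cost arithmetic for DES.
keyspace = 2 ** 56  # 56-bit key: about 7.2e16 candidate keys

# Assumed hardware search rate (roughly the order of the EFF DES cracker).
keys_per_second = 9e10

full_search_days = keyspace / keys_per_second / 86400
average_days = full_search_days / 2  # on average the key is found halfway through

print(f"full keyspace: {full_search_days:.1f} days, average: {average_days:.1f} days")
```

At this rate the whole keyspace falls in under two weeks, which is why fast hardware implementation plus a short key makes DES brute-forceable, while the same rate against a 128-bit keyspace would take longer than the age of the universe.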
To choose a cipher/hash, one thing you can look at is the safety margin: the number of rounds the function has divided by the number of rounds that have been broken in cryptanalysis. For example:
Quote:
Skein is secure. Its conservative design is based on the Threefish block cipher. Our current best attack on Threefish-512 is on 25 of 72 rounds, for a safety factor of 2.9. For comparison, at a similar stage in the standardization process, the AES encryption algorithm had an attack on 6 of 10 rounds, for a safety factor of only 1.7. Additionally, Skein has a number of provably secure properties, greatly increasing confidence in the algorithm.
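The safety factors in the quote are just this ratio, which is easy to verify:

```python
def safety_factor(total_rounds: int, broken_rounds: int) -> float:
    """Rounds in the full function divided by rounds reached by the best attack."""
    return total_rounds / broken_rounds

# Figures quoted above from the Skein team:
threefish512 = safety_factor(72, 25)      # attack on 25 of 72 rounds -> ~2.9
aes_at_submission = safety_factor(10, 6)  # attack on 6 of 10 rounds  -> ~1.7
```

A higher ratio means cryptanalysis has more ground to cover before reaching the full-round function, all else being equal.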
Block size, key size, and the computational complexity of proposed attacks are also important. It is also important that a cipher/hash receive a lot of cryptanalysis. RIPEMD-160 and Whirlpool are examples of hashes that have not received much cryptanalysis (only one study each that I could find). This doesn't mean they are bad, but it doesn't vouch for them either.
EDIT:
Another idea to consider:
Everyone Wants You To Have Security, But Not from Them https://www.schneier.com/blog/archiv...ne_wants_.html
The only reasonable solution is to keep your own data safe, from everyone but yourself ... the way it was meant to be.
Last edited by metaschima; 02-27-2015 at 11:56 AM.
Personally, cryptographically protected information does not concern me nearly so much as the vast amounts of totally unprotected information that can now be collected fairly effortlessly about millions of people at once. (For example, the exact geographic location of virtually every man, woman, and child ... well ... anywhere. 24 x 7 x 365, now for many years running. And the practical ability to analyze and thus to exploit that data.) This is not something that's held only by the guv'mint: it's held by corporations large and small throughout the planet. And it is stored ... who knows where ... and accessible to ... who knows. In his wildest 1984 fantasy, George Orwell could never have dreamed of this.
A government agency with billions of secret-dollars to spend can find its way through an encryption system ... even by bludgeon-ways such as literally brute-forcing it.
In the real world, cryptographic systems, regardless of their theoretical security, are used by ... people. Over communications channels that aren't perfect. Among people who might betray them for the right amount of money. (There is no honor among thieves ...) Any number of reasons.
And yet, also in the real world, cryptographic systems are ordinarily used for very mundane reasons: to make it "less than 'trivially easy'" to exploit a communication, either by intercepting it or by forging it. As Mr. Zimmerman said, "because it's nobody's business but yours." The mere fact that we send letters through the post "in an envelope" is a very important security feature, even though the envelopes can be steamed-open or ripped. A friend of mine kept an expensive guitar in a cardboard case with a tiny, tiny padlock on it: "to keep the honest people out."
Quote:
The mere fact that we send letters through the post "in an envelope" is a very important security feature, even though the envelopes can be steamed-open or ripped.
Or in some cases it can be read just by using a strong light source through it without opening it at all.
Yes, but because the envelope exists, you are obliged to do that extra step. It is no longer "trivially easy" (heh, so far as we know ...) to read and store the content of every message that passes through that Post Office.
Today, gigabytes of information pass through the system every day, with no impediments whatsoever to its wholesale "data mining." And people are mining it. ("You don't know who, you don't know where, you don't know what, and it never disappears.")
And most importantly, you never 'gave consent' to any of this practice. Face it, you never read any EULAs or Privacy Policies: you click "I Accept" because it's the only thing you can do. None of these things have ever been tested under the law. Your actual understanding ... the one that guides actual behavior ... is actually an implicit one. "Email" is ... "Mail." "Forum posts" are ... "Conversation." "Access to my location" is ... "purposeful, and limited to that purpose, and temporal."
We think that "we can do all of this stuff, and get away with it," because (at the moment at least) we are technically able to do it and there are no laws telling us, "No, here are the boundaries."
We're playing with something, the likes of which have never existed before, in all of human history. We're ignoring the risks because we've simply persuaded ourselves that the risks are theoretical or even nonsensical.