LinuxQuestions.org
Old 02-26-2015, 04:20 PM   #1
metaschima
Senior Member
 
Registered: Dec 2013
Distribution: Slackware
Posts: 1,982

Rep: Reputation: 492
Paper: Surreptitiously Weakening Cryptographic Systems by Bruce Schneier et al.


A recent paper by Bruce Schneier, Matthew Fredrikson, Tadayoshi Kohno, and Thomas Ristenpart:
http://eprint.iacr.org/2015/097

It is recommended reading for anyone concerned about cryptographic sabotage by government agencies and other entities. A layman should be able to understand most of it, especially the important parts. It is essentially a summary and categorization of recent cryptographic security compromises.

I read it and have the following conclusions and questions.

Conclusions:

1) Being FLOSS does NOT greatly increase the detectability of code sabotage. Major security flaws in FLOSS (Heartbleed, the Debian PRNG bug, GnuTLS, etc.) went undetected for years before being found and fixed.
2) Backdoors with high control (Lotus Notes, Dual EC PRNG) as used by the NSA are not generally exploitable by the public, and therefore may be more acceptable, assuming that they are implemented properly and that obtaining the key to the backdoor is infeasible for adversaries.

Does this mean you should be using closed-source software and/or let the NSA backdoor your programs? Absolutely not...

3) Kleptography and subliminal channels also have high control, can be used by the NSA, and would only be detectable in a FLOSS program.

The paper also recommends open-source code so that it can be inspected.
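The "high control" property of a Dual EC-style backdoor can be illustrated with a toy discrete-log analogue. This is a sketch with made-up parameters, not the real NIST construction (which uses elliptic-curve points and truncates its outputs, so the real attacker must also brute-force some missing bits); it only shows the structural idea: the two public "points" are secretly related, and whoever knows the relation can recover the generator's internal state from one output.

```python
# Toy discrete-log analogue of the Dual EC DRBG backdoor.
# All parameters are hypothetical; g1 and g2 stand in for the
# curve points P and Q, with the secret relation g1 = g2^d mod p.

p = 2**61 - 1          # a Mersenne prime, standing in for the curve
g2 = 3                 # public "point" Q
d = 123456789          # the secret backdoor relation ("P = d*Q")
g1 = pow(g2, d, p)     # public "point" P

def dual_ec_toy(seed, n):
    """Emit n outputs; output uses g2, state update uses g1."""
    s, out = seed, []
    for _ in range(n):
        out.append(pow(g2, s, p))   # r_i  = "x(s_i * Q)"
        s = pow(g1, s, p)           # s_i+1 = "x(s_i * P)"
    return out

outputs = dual_ec_toy(seed=42, n=4)

# Attacker: sees only outputs[0] and knows d. Since
# g1^s = (g2^s)^d = r^d (mod p), one modular exponentiation
# recovers the NEXT internal state from a single output.
s = pow(outputs[0], d, p)
predicted = []
for _ in range(3):
    predicted.append(pow(g2, s, p))
    s = pow(g1, s, p)

print(predicted == outputs[1:])  # True: every later output is predicted
```

This is exactly why such a backdoor is "high control": without d the outputs look like ordinary hard discrete-log instances, but with d the entire future stream falls out of one observation.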

Questions:

So why shouldn't the NSA backdoor encryption?

The high control of backdoors relies on the adversary not being able to obtain the key to the backdoor, and also on proper implementation. If the adversary is, say, China and/or Russia, couldn't spies steal the key and then be able to totally compromise nearly every security program using the backdoor?

Given enough computing power, couldn't they simply focus their efforts on deriving the key, thus totally compromising nearly every security program using the backdoor?

Is the benefit/cost ratio of such a broad backdoor high enough to make it preferable to a more targeted, precise approach (spying on select targets instead of everyone everywhere)? A targeted approach would only compromise the security of a small number of individuals (the targets).

Do you trust the NSA with all this information about everyone? Are they morally infallible, and will they never use the data for other purposes? Do they even need all this information about everyone? Does it help them accomplish some goal, and precisely what is that goal?

I think until all these questions are thoroughly answered, the NSA should not be allowed to backdoor encryption or otherwise compromise it.
 
Old 02-27-2015, 08:03 AM   #2
rtmistler
Moderator
 
Registered: Mar 2011
Location: USA
Distribution: MINT Debian, Angstrom, SUSE, Ubuntu, Debian
Posts: 9,882
Blog Entries: 13

Rep: Reputation: 4930
Interesting article and discussion points.

IMHO the only truly secure way to protect information is by physical means. One can encipher information as much as possible; however, if a determined party gets hold of it in whatever form, they can and will do as much as they can to decode and view it.

There are issues, though, related to the use of information: the users and their use case. Users can be government, commercial, or personal/private. Use cases vary widely and also affect the time-sensitivity of the information. For instance, in a battle, "Your air support is coming in now from the East" is highly urgent to both the requesting party and, obviously, the opposition. Meanwhile, a multi-million-dollar R&D effort is similarly important to the developer and owner of the information, and potentially very important to a competitor; the commercial case is simply different from the battlefield.

That's where this all becomes one of several problems: protected information must still be used and deployed, much as having money is not the point of an economy; rather, the act of using money is what makes the economy operate.

I have no proposed solutions myself for these issues.

Makes sense to me that "as a rule or law" the NSA should not be given these allowances until some better guidance has been determined.

This all makes me wonder a bit, though. Yes, I'm smart, but I'm not a certified genius. I have used and understand the principles of encryption systems, lived through and dealt with the fallout from the Walker incident, and participated in the design of central telephony switches that required law-enforcement electronic-surveillance capabilities. Still, with the explosion of encryption methods over the years, I'd have to take time to read up on them. Merely reading up on them would only give me an overview, and there is no guarantee that the source is available.

So where does that lead me if I choose to use encryption? I download something and use it, right? How would I know if the author put in a trapdoor for their own use, let alone due to the NSA?

To me that last one is an even bigger question.
 
Old 02-27-2015, 11:48 AM   #3
metaschima
Senior Member
 
Registered: Dec 2013
Distribution: Slackware
Posts: 1,982

Original Poster
Rep: Reputation: 492
Here is a concrete example of how NSA backdoors can backfire:
https://www.techdirt.com/articles/20...ier-list.shtml

Backdoors are indeed difficult to detect, but in the case of Dual EC DRBG, it was first flagged in 2006:
http://web.archive.org/web/201406210...tymatters_1115
yet it was only confirmed and deprecated recently, almost 10 years later.

IMO, stay away from anything the NSA touches; they can't help themselves given the opportunity. This is one reason why I don't trust AES or SHA-1/SHA-2. Cryptographers (even in this paper) recommend only using algorithms with nothing-up-the-sleeve numbers, i.e., constants selected in a demonstrable manner. Constants that appear to be chosen at random may be a sign of sabotage. In some cases you can substitute your own pseudorandom constants, even though this may weaken the crypto to some extent: constants are supposed to be chosen so that the cipher/hash has maximum diffusion. For example, in the Whirlpool hash (based on AES), the initial constants were pseudorandom; they were later changed to strengthen the cipher and make it easier to implement in hardware (not necessarily a good thing).
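The point of nothing-up-the-sleeve numbers is that anyone can re-derive them. A concrete case: SHA-256's initial hash values are defined as the first 32 bits of the fractional parts of the square roots of the first 8 primes, so the designers had no freedom to smuggle in chosen values:

```python
# Re-deriving SHA-256's initial hash values from first principles.
import math

primes = [2, 3, 5, 7, 11, 13, 17, 19]
iv = [int((math.sqrt(q) % 1) * 2**32) for q in primes]

print([hex(w) for w in iv[:2]])  # ['0x6a09e667', '0xbb67ae85']
```

These match the constants published in the SHA-2 standard, which is exactly the "demonstrable manner" the paper asks for. (Note the subtlety the Dual EC case exposes: the choice of *rule* is still a degree of freedom, so NUMS constants reduce, but do not eliminate, the room for sabotage.)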

For hardware implementation, take a look at the case of DES. It is possible to brute-force DES not only because of the short key size (56 bits), but also because of fast hardware implementations.
https://en.wikipedia.org/wiki/Data_E...e_force_attack
Thus, if a cipher/hash can be implemented efficiently in hardware, a brute-force attack on it is easier than on a cipher/hash of the same key size that cannot be implemented efficiently in hardware.
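A back-of-the-envelope estimate shows why the 56-bit key plus fast hardware is fatal. The key-testing rate below is an assumption, picked to be in the ballpark of EFF's 1998 "Deep Crack" machine (roughly 90 billion DES keys per second):

```python
# Rough brute-force time for DES at an assumed hardware rate.
keys = 2**56                     # DES keyspace
rate = 9e10                      # keys per second (assumed, ~Deep Crack era)

worst_days = keys / rate / 86400 # exhausting the whole keyspace
avg_days = worst_days / 2        # on average half the keyspace is searched

print(round(worst_days, 1), round(avg_days, 1))  # ~9.3 and ~4.6 days
```

Days, not centuries, on 1998 hardware; anything that parallelizes cheaply in silicon shifts the brute-force economics this way.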

To choose a cipher/hash, one thing you can look at is the safety margin, which is the number of rounds the function has divided by the number of rounds that have been broken in cryptanalysis. For example:
Quote:
Skein is secure. Its conservative design is based on the Threefish block cipher. Our current best attack on Threefish-512 is on 25 of 72 rounds, for a safety factor of 2.9. For comparison, at a similar stage in the standardization process, the AES encryption algorithm had an attack on 6 of 10 rounds, for a safety factor of only 1.7. Additionally, Skein has a number of provably secure properties, greatly increasing confidence in the algorithm.
https://www.schneier.com/skein.html
72/25 = 2.88
10/6 ≈ 1.67

Block size, key size, and the computational complexity of proposed attacks are also important. It is also important that a cipher/hash has been cryptanalyzed a lot. RIPEMD-160 and Whirlpool are examples of hashes that have not received much cryptanalysis (only one study each that I could find). This doesn't mean they are bad, but it doesn't vouch for them either.

EDIT:
Another idea to consider:
Everyone Wants You To Have Security, But Not from Them
https://www.schneier.com/blog/archiv...ne_wants_.html
The only reasonable solution is to keep your own data safe, from everyone but yourself ... the way it was meant to be.

Last edited by metaschima; 02-27-2015 at 11:56 AM.
 
Old 02-27-2015, 07:15 PM   #4
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 10,643
Blog Entries: 4

Rep: Reputation: 3933
Personally, cryptographically protected information does not concern me nearly as much as the vast amounts of totally unprotected information that can now be collected fairly effortlessly about millions of people at once. (For example, the exact geographic location of virtually every man, woman, and child ... well ... anywhere, 24 x 7 x 365, now for many years running. And the practical ability to analyze and thus exploit that data.) This is not something that's held only by the guv'mint: it's held by corporations large and small throughout the planet. And it is stored ... who knows where ... and accessible to ... who knows. In his wildest 1984 fantasy, George Orwell could never have dreamed of this.
 
Old 02-28-2015, 01:26 PM   #5
metaschima
Senior Member
 
Registered: Dec 2013
Distribution: Slackware
Posts: 1,982

Original Poster
Rep: Reputation: 492
I hope the NSA doesn't try for another Clipper chip:
http://threatpost.com/nsa-could-be-h...p-redux/111233

which, of course, was a total failure:
https://en.wikipedia.org/wiki/Clipper_chip

or, if they do try, I hope things will end the same way: with it failing.
 
Old 03-26-2015, 04:40 PM   #6
metaschima
Senior Member
 
Registered: Dec 2013
Distribution: Slackware
Posts: 1,982

Original Poster
Rep: Reputation: 492
Bill Introduced To Repeal Patriot Act And Prevent The Government From Demanding Encryption Backdoors
https://www.techdirt.com/articles/20...ackdoors.shtml

Relevant because, although it is extremely unlikely to pass, it would prevent cryptographic sabotage by the gov't.
 
Old 03-29-2015, 10:17 AM   #7
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 10,643
Blog Entries: 4

Rep: Reputation: 3933
A government agency with billions of secret-dollars to spend can find its way through an encryption system ... even by bludgeon-ways such as literally brute-forcing it.

Or, they can do it by "giving you an offer you can't refuse."

In the real world, cryptographic systems, regardless of their theoretical security, are used by ... people. Over communications channels that aren't perfect. Among people who might betray them for the right amount of money. (There is no honor among thieves ...) Any number of reasons.

And yet, also in the real world, cryptographic systems are ordinarily used for very mundane reasons: to make it "less than 'trivially easy'" to exploit a communication, either by intercepting it or by forging it. As Mr. Zimmermann said, "because it's nobody's business but yours." The mere fact that we send letters through the post "in an envelope" is a very important security feature, even though envelopes can be steamed open or ripped. A friend of mine kept an expensive guitar in a cardboard case with a tiny, tiny padlock on it: "to keep the honest people out."
 
Old 03-31-2015, 06:10 PM   #8
smeezekitty
Senior Member
 
Registered: Sep 2009
Location: Washington U.S.
Distribution: M$ Windows / Debian / Ubuntu / DSL / many others
Posts: 2,339

Rep: Reputation: 231
Quote:
The mere fact that we send letters through the post "in an envelope" is a very important security feature, even though the envelopes can be steamed-open or ripped.
Or in some cases it can be read just by using a strong light source through it without opening it at all.
 
Old 04-01-2015, 08:53 AM   #9
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 10,643
Blog Entries: 4

Rep: Reputation: 3933
Quote:
Originally Posted by smeezekitty View Post
Or in some cases it can be read just by using a strong light source through it without opening it at all.
Yes, but because the envelope exists, you are obliged to do that extra step. It is no longer "trivially easy" (heh, so far as we know ...) to read and store the content of every message that passes through that Post Office.

Today, gigabytes of information pass through the system every day, with no impediments whatsoever to its wholesale "data mining." And people are mining it. ("You don't know who, you don't know where, you don't know what, and it never disappears.")

And most importantly, you never 'gave consent' to any of this practice. Face it, you never read any EULAs or Privacy Policies: you click "I Accept" because it's the only thing you can do. None of these things have ever been tested under the law. Your actual understanding ... the one that guides actual behavior ... is actually an implicit one. "Email" is ... "Mail." "Forum posts" are ... "Conversation." "Access to my location" is ... "purposeful, and limited to that purpose, and temporal."

We think that "we can do all of this stuff, and get away with it," because (at the moment at least) we are technically able to do it and there are no laws telling us, "No, here are the boundaries."

We're playing with something, the likes of which have never existed before, in all of human history. We're ignoring the risks because we've simply persuaded ourselves that the risks are theoretical or even nonsensical.
 
  

