shared key vs signed certificate in one use case
I'm looking for any good reason not to use a shared key for a specific secured data transfer case. So far I haven't found one. But at least one person I've described this to seems overly fearful of a shared key. I think they are reacting to the fact that a shared key is not suitable in the general case, without having analyzed how it would apply in the case I describe below:
The use case is a one-time data transfer. The scenario: a user is logged in to two separate remote computers over slow SSH connections. A large volume of data needs to be transferred from one remote host to the other. The hosts are located far apart, so they must reach each other directly over the internet. The SSH connections might be as slow as mobile 2G or even dial-up, while the data to be transferred between the remote hosts might be a terabyte in size. You simply would not even think of transferring it down one SSH connection and up the other. So a direct connection is clearly essential, and it must be secured with encryption. Preventing others from seeing the data is the primary concern, but undetected alteration is also a concern.

The transfer needs to be spontaneous, with no complex setup such as starting an SSH daemon. Root permissions are unavailable, and creating cross-signed certificates is not practical. I believe that in this scenario, a script that sets up a networked pipeline on the two remote machines, using an encryption cipher like aes-256-cbc as available in a tool like openssl, would be suitable. The user would provide a randomly chosen string to each remote machine, where it is safe to do so, and the openssl program would end up with that key. The data would be compressed and encrypted on the source machine, transmitted over a TCP connection to the destination machine, and there decrypted and uncompressed.

The question: Is there any reason that the use of a shared secret key between these remote hosts is not appropriate? Is there any reason to justify the greater complexity of setting up public-key SSL/TLS certificates for this case?
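A minimal sketch of the pipeline described above, assuming socat and openssl are installed on both hosts. The port number, hostname, and the SHARED_KEY environment variable are placeholders, and the -pbkdf2 key-derivation flag is my assumption, not part of the original description:

```shell
# On the receiving host (started first): listen, decrypt, decompress.
# SHARED_KEY holds the passphrase the user typed/pasted in on both ends.
socat tcp4-listen:12345,reuseaddr - \
  | openssl enc -d -aes-256-cbc -pbkdf2 -pass env:SHARED_KEY \
  | tar xzpf -

# On the sending host: compress, encrypt, transmit.
tar czf - data \
  | openssl enc -aes-256-cbc -pbkdf2 -pass env:SHARED_KEY \
  | socat - tcp4:recv-host.example.com:12345
```

Reading the passphrase with `-pass env:SHARED_KEY` keeps it out of the process list, which `-pass pass:...` on the command line would not.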
Your entire scenario depends on the security of that key and on getting it from place to place. Maybe the SSL protocol is what you really want: it can negotiate a secure channel with anyone who wants to connect, and the two parties can then authenticate themselves any way they wish over the now-secure connection.
The sender needs to make sure only the authorized receiver can get the data in the clear. The receiver needs to make sure only data from the authorized sender is accepted. We are familiar with how that happens with SSL through signed server certificates and client certificates (though client certificates are rarely used; login passwords over an assumed MitM-safe channel are used instead). But in my scenario, no SSL setup is in place.

If the data volume were small, a reasonably secure scheme could be set up through SSH, although this requires anticipation by the person who started the two SSH sessions. The SSH login to the sender host could have been:

Code:
ssh -R localhost:12345:localhost:12345 user@remote-send

and the login to the receiver host:

Code:
ssh -L localhost:12345:localhost:12345 user@remote-recv

Then, on the receiving host:

Code:
socat tcp4-listen:12345 stdout | tar xpfz -

and on the sending host:

Code:
tar cfz - data | socat stdin tcp4:localhost:12345

Now this is fine for cases where the data is not too large and the client's connections to the remotes have sufficient bandwidth. But in the scenario I described, this SSH pipelining/forwarding scheme puts too much burden on the client's SSH connections. So I want to go direct, and in as simple a way as possible. It is OK, however, to have a script ready to go on each remote host to carry this out. Such a script is something that can be shared among the community and easily put anywhere ahead of time. A certificate setup for SSL could not be done so easily on the fly.

What I am suggesting is a script that can be given the cipher key on the command line or in response to a prompt. The user at the client machine can simply make up a random key and type it in on each host she is connected to when prompted (or generate something stronger and copy/paste it in). The trust relationship between the two hosts is established by the user typing/pasting in this key, and only for the duration of its use. What I see is that this is AS GOOD AS any other trust relationship, particularly because it is a one-time event and does not depend on a key that is BOTH shared AND stored. I have a little toy/tool that can make the keys for me (originally for making passwords, but it will go to an extreme if asked to):

Code:
lorentz/phil /home/phil 31> mkpw 72

Code:
lorentz/phil /home/phil 32> uuidgen

Can you show how the scripts executed on the two remote hosts would establish trust in a simple way, so the user only needs to run one command on each end? I do not think this is possible.
Your proposal is essentially the classic pattern: establish a secure channel (usually using asymmetric keys; here via ssh, which itself uses asymmetric keys) and distribute a shared session key over it. SSL does something similar.
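That pattern could be sketched like this, assuming the client still has working ssh logins to both hosts; the hostnames and the key file path are placeholders:

```shell
# Generate a one-time session key on the client, then push it to both
# remotes over the already-authenticated ssh channels. Only this short
# key crosses the slow links; the bulk data goes direct.
KEY=$(openssl rand -hex 32)
printf '%s\n' "$KEY" | ssh user@remote-send 'umask 077; cat > "$HOME/.xfer.key"'
printf '%s\n' "$KEY" | ssh user@remote-recv 'umask 077; cat > "$HOME/.xfer.key"'
```

The `umask 077` ensures the key file is readable only by its owner on each remote.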
The client machine that runs ssh to the two remotes? If I do that at the client, it will need to pass the same key to ALL the hosts (there might be hundreds, but the transfer only needs to be done by two of them, while another may have been broken into). A mechanism for the script on the remotes to communicate back to the client, address the other remote specifically, and do a key exchange that way could make more sense. It would be stronger than just a random shared string, and it could be on the same time scale as the cipher keys in SSL (which are ultimately shared keys of rather short duration). I could have SSH always set up a reverse channel back to a client-based "key trader" server/agent. But if I don't have that mechanism, just how weak is it to have the user simply think up a pass phrase and use it... once?
Code:
keyname=key.$(date).$random_id
Here is what I've put together so far around this idea: http://wiki.slashusr.net/documentation/scripts/cpionet http://wiki.slashusr.net/documentation/scripts/tarnet

What I am thinking of changing is to have one end generate a random password and output it with instructions to copy it to the other machine. The listen side needs to be run first, so that is where it would be generated.
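A hedged sketch of that change — this is not the actual cpionet/tarnet code; the port number and cipher options are assumptions:

```shell
# Listen side: generate a one-time passphrase, display it for the user
# to copy/paste to the sending host, then wait for the encrypted stream.
XFER_PASS=$(openssl rand -base64 32)
export XFER_PASS
echo "Give this one-time passphrase to the sending side: $XFER_PASS"
socat tcp4-listen:12345,reuseaddr - \
  | openssl enc -d -aes-256-cbc -pbkdf2 -pass env:XFER_PASS \
  | tar xzpf -
```

The sending side would prompt for (or be given) the same passphrase and run the mirror-image encrypt-and-send pipeline.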
Please bear in mind that the SSL/TLS protocol does not have to be "open" to anonymous clients: it also supports certificate-based client authentication.
It's also not the only secure-tunnel protocol out there. VPN, of course, is another; SSH tunneling is a distant third. In any case, you want to use an existing, well-understood, industry-standard tunneling protocol that is easily supported by your hardware. You don't want to roll your own.

Once you have established the secure connection, the two sides must still authenticate with one another. Then they can exchange information through whatever methods you please, all through the secure tunnel, which neither of them has to secure further.

I invariably and strongly encourage a certificate-based approach, and here's an easy example why. When you go into your workplace, there's no one there saying "say the magic word." No, you swipe your badge. You can't copy the badge. If you lose it, or get fired, or whatever, that specific badge is immediately made useless; no one else is inconvenienced. The company can specify exactly what that unique badge is good for, and can easily change that profile at any time.

Password-protected certificates are strong additional protection because they are based on strong encryption of the key contents. But you still have to possess a valid, one-of-a-kind key, in addition to having the means to unlock it.
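For the one-time-transfer scenario, a certificate-based tunnel need not involve a CA. Purely as an illustration (the file names, subject, port, and hostname are made up), a throwaway self-signed pair usable with socat's OpenSSL addresses could be generated like this:

```shell
# Create a short-lived, self-signed key+certificate pair.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout xfer.key -out xfer.crt -subj "/CN=xfer"

# socat's openssl-listen expects key and certificate combined in one PEM.
cat xfer.key xfer.crt > xfer.pem
chmod 600 xfer.pem xfer.key

# Receiver side (no client verification here, for simplicity):
#   socat openssl-listen:12345,cert=xfer.pem,verify=0 - | tar xzpf -
# Sender side, after copying xfer.crt over to pin the server's certificate:
#   tar czf - data | socat - openssl:recv-host.example.com:12345,cafile=xfer.crt
```

Pinning the exact certificate with `cafile=` gives the sender server authentication without any CA infrastructure; the key still has to be kept on disk on the receiving host, unlike a typed-in one-time passphrase.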