Encrypt a file so that only a SSL site's private key can decrypt (dpaste.de)
55 points by bigiain 1700 days ago | 43 comments

While it is a nice idea, I do not see any practical applications.

It requires a receiver who a) is knowledgeable enough to decrypt it using something like openssl and b) has access to the actual private key.

If that's the case, it is highly likely that this person is either capable of using something like PGP or of setting up a file upload form on the apparently available website. Both of those are at least as secure and a whole lot more convenient.

Something related to this that I would be really interested in is a way to encrypt a file client-side, in the browser, using JavaScript, prior to uploading it (without the need for the user to do special tricks beyond picking a password). So far this only seems remotely possible using something like Flash or Silverlight.


JavaScript is too hostile an environment for crypto. How do you know your JS crypto code wasn't modified in transit? You use SSL. Now that you're using SSL, why do you need encryption in the JS at all?


Whoever thinks someone didn't change the code on the server might also be kidding himself...


I could see it being useful if you aren't sure how to get an encrypted message to someone.

Much encryption can be done in JavaScript.





So it appears that this site has the perceived advantage of turning a web developer's SSL cert into a key for encryption, which at first sounds like a good idea.

If on the other hand you don't exclusively communicate with SSL-using web developers, you're better off using a PGP implementation. Unfortunately Symantec bought PGP Corporation, but GPG4Win[1] is free, as are GPG-based PGP implementations for almost every other platform.

Worried about it being hard? It really isn't. In fact it's easy to get up and running with encrypting your Gmail in Firefox[2] even if you have a Mac or something else[3].

[1] - http://www.gpg4win.org/

[2] - http://www.instructables.com/id/Send-and-Receive-Encrypted-E...

[3] - http://www.instructables.com/id/Encrypt-your-Gmail-Email/


Here's what he's using it for: http://vulnarb.com/

It's not for communicating with web developers. It's for publicly and provably, but responsibly exposing vulnerabilities in software. The people/entities who write such software are likely to have a public website with SSL. If they don't, it's unlikely to be very important or widely used software. In that context, PGP makes no sense.


Can you point me to a site that discusses best practices for generating and securing my keypair? I don't have enough faith in an Instructable to get all the details right. How would I go about becoming part of this "web of trust" that I keep hearing about in relation to PGP?


The GNU Privacy Handbook does a pretty good job of blending both the how and why of everything.


The biggest thing to worry about is that you only want the private key on systems you trust. If you put your private key on a USB stick, and use the local library or computer lab, you've already lost the battle. If you're running a totally infected Win95 machine, you've already lost the battle.

Second biggest thing is to make sure you properly generate a revocation certificate, and a backup, and store them in a location you consider secure. (And maybe that secure location is just a shoebox in your bedroom closet unless you're worried about the NSA or something.) Then if you realize you've done something stupid, you can just revoke the key and create a new one.

Other than that, there's not much to screw up if you follow the default settings when creating a key with gpg.

For email, I would also highly recommend using a local MUA that connects to gmail. Most people use Thunderbird + Enigmail, but there are other options. Enigmail also has a pretty good manual that covers both the how and why.


The various gpg-related mailing lists are also pretty friendly. They're low-traffic enough that people are always happy to answer basic questions; no RTFM replies.


While there are Internet services that purport to realize an actual "web of trust", the "web of trust" concept is really notional. It's more an idea than a thing. The idea is, instead of having a central authority that vouches for every key, you get your keys directly from your acquaintances, verify them directly with your acquaintances, and in turn vouch for those keys with your peers.


If you use the Firefox Gmail extension, watch out for the autosave drafts feature of Gmail, which will put a bunch of copies of your unfinished unencrypted email on Google's servers.


Since RSA public/private key encryption can only encrypt payloads as large as the (public) key, this is the only way to safely encrypt larger files.

That's why PHP provides the openssl_seal function, which can do exactly the same (and more) as the given code. (http://php.net/manual/en/function.openssl-seal.php)

When you Google for it there are also implementations for other languages available.


It's not the RSA key size that prevents you from using RSA for arbitrary message lengths. Notice that block ciphers used for bulk encryption are similarly constrained by their block length (which are far shorter than an RSA key). In both RSA and AES, you can use a chaining construction to encrypt arbitrary-length messages.

The things that keep you from doing this in the real world with RSA are security and speed.

It's first of all much slower to perform a single RSA operation than it is to perform the AES block transform. AES involves no bignum math at all, let alone bignum modular multiplication.

Secondly and more importantly, RSA encryption is fundamentally volatile and dangerous. As an exercise, go implement it in Python or Ruby (you're going to find it's remarkably easy, since both those languages automatically promote to bignums). RSA is just a simple formula. As a result, there are a variety of pitfalls to using it safely. Among the important ones is the fact that you can't safely encrypt related messages, and that messages require a certain proportion of random padding.
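To make the exercise concrete (a hypothetical toy sketch, not anyone's production code): textbook RSA really is just a few lines of Python, which is exactly why the raw formula is so easy to misuse.

```python
# Textbook RSA in a handful of lines, as the exercise suggests -- toy
# parameters only. This is the raw, unpadded formula being warned about:
# identical plaintexts always yield identical ciphertexts.

p, q = 61, 53                       # toy primes; real keys use 1024-bit+ primes
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+ mod inverse)

m = 42                              # the "message": an integer, necessarily < n
c = pow(m, e, n)                    # encrypt: c = m^e mod n
assert pow(c, d, n) == m            # decrypt: m = c^d mod n
assert pow(m, e, n) == c            # no random padding: same m -> same c, always
```

Notice there is no padding and no randomness here, which is precisely the pitfall about related messages mentioned above.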




A script without context or explanation of what it is useful for. Why did you submit this?


That's a neat hack. I once had call to send banking details to a client. It was very annoying - eventually settled on encrypting an emailed zip file and sending password out of band.

This, on the other hand, turns any web developer's SSL cert into their PGP key without their advance cooperation. (They don't have to have one, understand why they need one, or create one and publish the public key. They just have to have an https site, like all my clients already do.) Limited utility, since decrypting is impossible for regular people and larger corps would have that private key locked down like crazy, but a very neat hack. I could actually see myself using it, too, for secure geek-2-geek transmissions.


Friends of mine have created http://lockify.com/ exactly for the purpose you outline with the banking details.


Zed Shaw wrote this (https://twitter.com/#!/zedshaw/status/54434747652390912) and then he and Dan Kaminsky had a good talk on Twitter about it.


Great, I think that got most of it (validation!), but revocation checking worries me, and a skim of the OpenSSL (0.9.8o) sources doesn't leave me with the warm'n'fuzzies.

s_client.c calls SSL_CTX_set_verify() (the default verifier). Results from that can be obtained from SSL_get_verify_result() and are documented in verify(1).

All of the CRL/revocation-related return codes there are marked "unused". There is no mention of OCSP.

I did find a "crl_check/crl_check_all" option for verify(1). The command-line help mentions an "ocsphelper". OpenSSL does have a separate OCSP client. But I don't think any of this machinery is activated by default.


I thought it was a fabulous (tho limited utility) hack around the key distribution problem.


I suggest you patch the call to rm with a call to srm:



Better to mount /dev/ram0 and do your encryption there. Then shred; rm -rf; umount for the win.


...or at least use shred(1) and then rm(1)


Six of one, half a dozen of the other?

Also note that there are interesting complications around long-term storage of file data on SSD drives; that deserves investigation.


Or journaling file systems.


Why don't you use the standard openssl RSA encryption function to encrypt the entire file, rather than encrypt a plaintext passphrase?

I don't know of an implementation that uses RSA encryption that doesn't use RSA to encrypt a (heavily padded, very random) key, which it then uses to encrypt the final payload using, say, AES or IDEA (in the original PGP).


Speed; because ciphertext-only cryptanalysis becomes easier as more and more ciphertext accumulates under a given key; and because, if someone attempts to analyse a memory core dump or the memory space of your computer, hopefully the only key material available is the session key rather than the full decrypted RSA private key.






That makes no sense.

You create some random key K.

You encrypt K using the public key of the recipient, i.e. e(K).

You encrypt the message using K.

You send both of those to the recipient. A ciphertext-only attack can at most recover K from your message M. It is not then possible to recover the private key from K.

In neither case do you ever have the private key, so it cannot ever be recovered from a core dump. In this script, it seems like they are simply doing this twice, for some unknown reason.
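To make the K / e(K) construction concrete, here is a hedged sketch using toy stand-ins: textbook RSA with tiny demo primes wraps the session key, and a SHA-256 keystream XOR stands in for a real symmetric cipher like AES (which Python's standard library doesn't provide).

```python
import hashlib
import os

# Toy recipient RSA key pair (real keys: large primes plus OAEP padding).
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def stream_xor(key: bytes, data: bytes) -> bytes:
    """Demo-only stream cipher: XOR data with a SHA-256-derived keystream."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Sender: pick a random session key K and wrap it with the recipient's
# public key. K is toy-sized (1 byte) only so it fits under the tiny n.
k = os.urandom(1)
wrapped = pow(int.from_bytes(k, "big"), e, n)      # e(K)
ciphertext = stream_xor(k, b"attack at dawn")      # message encrypted under K

# Recipient: unwrap K with the private key, then decrypt the message.
k2 = pow(wrapped, d, n).to_bytes(1, "big")
assert stream_xor(k2, ciphertext) == b"attack at dawn"
```

As the parent says, nothing in this exchange ever involves the recipient's private key on the sender's side, so there is nothing beyond K for a memory dump of the sender to leak.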


Sorry, there seems to have been a misunderstanding; I completely agree with you. I thought you were asking "Why bother using sessions keys, rather than encrypt the whole message using RSA?". My bad.


You got downvoted, and maybe I've misinterpreted the thread, but my perception was:

* Parent commenter thinks messages should just use RSA, and not RSA+AES.

* You try to explain why he should use RSA+AES instead of RSA.

* He tries to post an analysis of why to use RSA-only.

Can I just step in to say: (a) using RSA only is way slower, like you said, and (b) it is significantly harder to make bulk RSA encryption secure than it is to make bulk AES encryption secure, just like you said?


It goes without saying that you shouldn't do a simple "rm" on the password file. I'm not going to put on my tin-foil hat and start gibbering about magnetic alignment and electron tunnel microscopy but this is just "one of those things" that everyone knows to avoid.


Instead of 'echo "QUIT" | openssl ...', why not 'openssl ... < /dev/null'?


Using /dev/urandom as password source is a very bad idea.


Using /dev/urandom as a password source is fine. It's a CSPRNG. It theoretically degrades if you exhaust entropy, but there's no current attack I know of based on that property. Also, RNG attacks are usually "online", meaning an attacker gets to continually interact with the RNG. This is a one-off offline use. In this scenario, you could probably survive with rand().
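For the one-shot case described here, drawing a password from the OS CSPRNG is a one-liner; for example (a sketch using Python's stdlib, whose secrets module is built on os.urandom, i.e. /dev/urandom on Linux):

```python
import secrets

# One-off password from the OS CSPRNG: secrets reads os.urandom under the
# hood (/dev/urandom on Linux), which is fine for this one-shot, offline use.
password = secrets.token_urlsafe(16)   # 16 random bytes, ~128 bits of entropy
print(password)
```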


Agreed. /dev/urandom should be mixed with other sources of entropy (system statistics, epoch, low-level counters, cryptographic PRNGs like Yarrow) and then "combined" using a cryptographic hash. Such a principle is used in e.g. Fortuna.




Or just use /dev/random :)


Reads from /dev/random will block when the entropy pool is empty. You can see the number of bits of entropy available on a Linux system via:

$ cat /proc/sys/kernel/random/entropy_avail

If you need more, better randomness, check out the Entropy Key:



Blocking /dev/random when entropy is low is the correct behaviour, but it is system-dependent behaviour. Darwin (Mac OS X) has the two sources behave identically.

The Darwin man page justifies this behaviour saying:

     /dev/urandom is a compatibility nod to Linux. On Linux, /dev/urandom will produce lower quality output if the
     entropy pool drains, while /dev/random will prefer to block and wait for additional entropy to be collected.  With
     Yarrow, this choice and distinction is not necessary, and the two devices behave identically. You may use either.
and then contradicts itself later by saying:

    Yarrow is a fairly resilient algorithm, and is believed to be resistant to non-root.  The quality of its output is
    however dependent on regular addition of appropriate entropy.


Care to explain why?


A counterpart to /dev/random is /dev/urandom ("unlocked"/non-blocking random source) which reuses the internal pool to produce more pseudo-random bits. This means that the call will not block, but the output may contain less entropy than the corresponding read from /dev/random. While it is still intended as a pseudorandom number generator suitable for most cryptographic purposes, it is not recommended for the generation of long-term cryptographic keys.



On virtual machines, /dev/urandom contains very little if any entropy.

Basically /dev/random takes entropy from the system and feeds it to you.

/dev/urandom is a pseudorandom number generator that reseeds from entropy as it gets it. I.e., if it has no entropy, your random numbers are anything but random.


This is a drastic oversimplification. Both urandom and random (on Linux; there's no difference between the two on BSD) are seeded from hard entropy sources. Both urandom and random extract entropy by updating pools with SHA1. The difference is that random has an estimator and will demand more hard entropy when it has serviced too many requests. But it's not as if urandom goes from producing "101010100101000101010100111001" to "111011011110111101111111110111" when entropy is depleted.

In any case, this is entirely irrelevant to the discussion at hand. You can absolutely use /dev/urandom to make a one-shot crypto key. You shouldn't wire /dev/urandom up into an online cryptosystem (don't use it to produce DH parameters, for instance), but even then, urandom isn't going to be how your system really gets broken.

In your case, experimenting with encrypting whole files with RSA instead of using RSA to exchange keys is what's really going to break your system. This is almost a decent example of how people obsess over the wrong things in cryptosystem design, and why perhaps generalist programmers should stay far, far away from this stuff.


"and why perhaps generalist programmers should stay far, far away from this stuff."

Could I adjust that to say "generalist programmers should stay at least enough in touch with this stuff to know how badly they'll screw it up on their own"?

I've had _many_ heated discussions with inexperienced devs who don't understand just how much you need to know (and how much you need to know that you don't know) before you can start ignoring the simple advice "SSL for data on the move, GPG for data at rest".


Virtual machines receive very little entropy from their environment, which is a real problem when entropy is required for the generation of cryptographic keys.

There have been many attacks based upon vulnerabilities which exist due to misunderstandings of entropy and of the need for a secure random number generator, for example the Mozilla SSL vulnerability and the Debian SSH key vulnerability.

I would agree with you that /dev/urandom can be used for one-shot passwords; however, I would maintain that getting into the habit of using a non-secure random number generator as a source of secure entropy is a bad idea and should be discouraged.

I'd also like to point out that "the standard openssl RSA encryption function", last time I checked, worked to spec, and does in fact encrypt a symmetric key used for AES (by default) using RSA, including proper cryptographic padding of the key using PKCS#1.

I'm not exactly sure why you thought otherwise.

I do agree with your final assertion, though. Unless you know what you're doing, it's very easy to make a mistake.

