Cryp.sr - a minimal host-proof cryptographic textpad (corte.si)
39 points by giu on Mar 29, 2010 | 26 comments

I've not heard the phrase 'host-proof' before, but I wrote my thesis on trusted computing platforms (http://www.cs.auckland.ac.nz/~cthombor/Students/mbarrett/mba...). Some of the work I review in it was related to proving a remote host to be trustworthy, which seems relevant here.

I also informally discussed a number of features that a community would require so people could trust numerous hashes of compiled software.

Everything, it seems, ends up as a web app these days :)

It's not trusted computing. It's "encrypted on the client by code downloaded from the server before sending it back to the server".

An untrusted host seems like exactly the wrong case for web applications. What's to stop the host from adding in some key-logging Javascript? If it's cleverly hidden, or inserted only for certain IPs, chances are low that anybody would notice. Then the host would have your encrypted text and passphrase.

It would be better to have a desktop application, with source code the user can verify. Then no matter what the host changes on their end, I can be sure that only the encrypted text is being sent. Host-based applications can also tie into existing frameworks like GNOME Keyring and GnuPG, for stronger security than a 128-byte passphrase can provide.

The point is that all hosts should be "untrusted", because no host is 100% secure, and no host is controlled by a 100% trustworthy entity. This fact is inconvenient, so it's pretty much ignored in the current generation of web applications. Host-proof app developers don't say that their hosts are somehow less _trustworthy_ - we recognize that all hosts, including our own, are untrustworthy, and try to design applications that take this into account.

It's early days, but I think that given published client-side source, enough users to provide vigorous peer review, and a verification mechanism like AppHash (which guards against exactly the kind of injection you talk about), it's possible to get it right.
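The verification mechanism described above can be sketched concretely. This is an illustrative Python sketch of the general idea (pin the hash of a peer-reviewed release and refuse to run anything else), not AppHash's actual implementation; the names and the sample sources are invented.

```python
import hashlib

# Hypothetical sketch: the client pins the SHA-256 of a published,
# peer-reviewed release of the page's JavaScript, then refuses to run
# anything the server sends that doesn't match.

good = b"var cipher = encrypt(text, key);"
evil = b"var cipher = encrypt(text, key); sendKeyTo('attacker');"

PINNED_HASH = hashlib.sha256(good).hexdigest()

def verify_served_source(served: bytes, pinned_hash: str) -> bool:
    """Return True only if the served source matches the pinned release."""
    return hashlib.sha256(served).hexdigest() == pinned_hash
```

An unmodified page passes; a page with injected key-logging code fails the check, which is exactly the injection attack discussed above.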

I read that as: "current web app designs are bad, and this is no worse." But that assumes the only choice is between your JS crypto and other bad web app designs, which is a false dichotomy. There is a third option.

Whatever happened to all the other non-web-app crypto that has been doing much better for the past 20 years (PGP, SSL, /dev/urandom)? How does your solution compare to those?

These things are entirely orthogonal. The "host-proof" paradigm is specifically a web application design paradigm - the question we're asking is "how can we do web apps better". Unless you're able to convince your mom to interact with Facebook using PGP, it's just not relevant to this discussion.

As an aside, I would _love_ for browsers to expose a good, standardized set of crypto routines. They don't, so we do the only thing we can - implement them in Javascript. This makes me as queasy as anyone, but it's the best we can do at the moment.

Agreed. Web browsers are infamous for being vulnerable to remote attack - I want to keep my sensitive data away from them as much as possible.

This concept of "host-proof hosting" seems to encourage that we discard well-proven methods in favor of implementing new crypto schemes in the hostile browser-space.

Are web end-users really better equipped to ensure application security than web host administrators? </rhetorical>

There's no such thing as "host proof". There is software tamper resistance, which gives limited protection against the host and is often used for DRM. But this software does not use software tamper resistance. The author made up the term "host proof" to refer to the fact that the data is not stored on the client and is sent to the server encrypted. That's it.

This is Javascript "pgp -c" (passphrase-based encryption) but less secure.

With Javascript crypto, you implicitly trust the server. You are executing code downloaded from the server in order to encrypt your data to upload to the same server (or one run by the same entity if not the exact same box).

Instead of trusting the server to encrypt your data for you, you're trusting it to send you unmodified, non-buggy code to encrypt your data for you. Every time you connect to it. The one time it sends you a trojaned (or buggy) version of the code, your data is compromised.

I prefer to get my software once, audit it, and be sure I'm running the same thing every time. With Javascript crypto, every time you connect to the site is a crapshoot. Is it 0wned today? Maybe.

The comments section of a post I wrote has a discussion of Clipperz, a JS crypto library:


He actually talks about that in the post, if you read it. He's making a Firefox plugin to validate JS before it's run; it's a bit down near the end.

It's incorrect that there's no such thing as "host-proof", assuming the meaning intended by cryp.sr, but you'd be right to be put off by the false sense of security it provides to users of the web-based version of the interface.

I believe that's the point of the client library at http://github.com/cortesi/crypsr_client. The only things you're asked to trust are that the encryption algorithm is good and that the host will delete the encrypted data long before the processor in your smartphone is fast enough to decrypt it in 30 milliseconds. If the host returns bad data, it simply won't decrypt, and it will be trash.
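One way to guarantee that bad data from the host is detected, rather than decrypting to trash, is to bind a MAC to the ciphertext under a key derived from the passphrase. This is a hedged Python sketch of that general construction; cryp.sr itself uses jsCrypto's AES, and the salt, iteration count, and function names here are illustrative only.

```python
import hashlib
import hmac

# Sketch: derive a MAC key from the passphrase (which the server never
# sees), tag the ciphertext, and verify the tag at retrieval time.

def mac_key(passphrase: bytes, salt: bytes) -> bytes:
    # PBKDF2 key derivation from the user's passphrase.
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000)

def seal(key: bytes, ciphertext: bytes) -> bytes:
    return hmac.new(key, ciphertext, hashlib.sha256).digest()

def verify(key: bytes, ciphertext: bytes, tag: bytes) -> bool:
    # Constant-time compare; a host that returns modified bytes fails here.
    return hmac.compare_digest(seal(key, ciphertext), tag)

salt = b"fixed-demo-salt"
key = mac_key(b"correct horse battery staple", salt)
tag = seal(key, b"opaque ciphertext blob")
```

With this, tampering by the host is an explicit verification failure on the client rather than silently garbled plaintext.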

The real problem with this service is that it's intended to be an easy-to-use, secure means of sharing information from one to many. But to make truly secure use of it, everyone needs to download a dedicated trusted client, at which point the problem is no different than it has always been, and only marginally more user-friendly than hosting the data yourself.

Refusing to trust or rely on SSL by using Javascript-based encryption is something I've experimented with a bit in the past. Without it, you are still relying on SSL, which only protects the information in transit anyway. If the server is compromised, then at least all of the information already stored on it remains secure, since the keys are never on the server at all. At that point you only have to worry about whether the algorithm has flaws, or whether the person gaining unauthorized access has a couple of supercomputer farms and many years on their hands.

It is true that if you rely on the host to provide the encryption algorithm, then it is open to future compromise, but the data can't be retroactively compromised until the moment it's accessed again via a modified page. For that reason, using cryp.sr via a browser is less desirable than using the open-source client, at least until there's a trusted plugin or Greasemonkey script.

It'd be neat if there were signed or invariant web documents. Perhaps naming things as their own checksum? This way, if it were ever changed (...) the document name would change and the reference would break. The client could check at retrieval time if the checksum matches the label?
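The "name things as their own checksum" idea above is essentially content addressing, and is easy to sketch. This is an illustrative Python sketch; the function names are hypothetical.

```python
import hashlib

# Sketch: the document's name is the SHA-256 of its content, so any
# change to the content changes the name and breaks the reference.

def content_name(document: bytes) -> str:
    return hashlib.sha256(document).hexdigest()

def verify_retrieval(name: str, retrieved: bytes) -> bool:
    # Client-side check at retrieval time: does the checksum match the label?
    return content_name(retrieved) == name

doc = b"host-proof notes v1"
name = content_name(doc)
```

A retrieved document that matches its label verifies; a modified one does not, which is exactly the "reference would break" behavior described.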

Uses whitebox encryption techniques? I don't think so, since he doesn't mention the term, and since he's using the jsCrypto library. I'm not sure one can implement whitebox encryption in a VM language without calling out to a DLL.

It's not using white box crypto. You can implement white box (painfully) in Javascript - white box is a transformation of the algorithm itself, not simply an implementation trick.

Are there Javascript obfuscators/transformers that implement this?

Not that I've seen, but again, the concept is algorithmic; nothing precludes it from being implemented in Javascript. The whole idea behind WBC is that you don't trust the implementation environment.

Clickable: http://cryp.sr

It seems that this could be vulnerable to the Evil Maid attack. Can someone more knowledgeable than myself shed some light on this?

Most software-based security systems assume that the hardware hasn't been maliciously tampered with. cryp.sr is apparently no exception.

If you suspect such attacks, the only defense seems to be adding more hardware. For example, you could use the root of trust from a TPM module or the added physical security provided by a lockbox.

interesting. so is something like abine host proof? http://www.getabine.com/ (i found them via here, but haven't used them). they seem to store the data on the client, but i guess that still requires that you trust their code, so it's not host proof? but then isn't that true of cryp.sr too?

The part from AppHash on covers verifying that the downloaded code matches the externally released / peer-reviewed version.

He does seem to be trusting Twitter, however (and Twitter isn't host-proof). Spreading out or mirroring the discussion could give you high confidence, but I think you'd need something like PGP's web of trust to be provably secure.

oo, interesting - sorry, i didn't read that far.

even better (from my pov) would be something that worked similarly, but used a local cache and told you when pages changed. i imagine it would always warn on first change and then flag pages that always change as "insecure", pages that have just changed as "hack attempt", and pages that are constant as "original". you could imagine browser bar colours that signal those...

this is the same kind of approach i think should be used with SSL certification (indicating when the CA changes for a particular page), which would help secure against the recently discussed scenario of govts pressuring CAs to sign fake certs (and obviously isn't new - it's basically what ssh does).

[edit: hmm, although what page is completely constant? when my bank balance changes i don't want an error flagged. no idea how you solve that in general]
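The local-cache idea above can be sketched in a few lines. This is an illustrative Python sketch of exactly the policy speculated in the comment (warn on first sighting, trust constant pages, flag changes), not an existing tool; the labels are the commenter's own.

```python
import hashlib

# Sketch: remember the hash of each page per URL and classify
# subsequent fetches against the cached value.

cache: dict[str, str] = {}

def classify(url: str, body: bytes) -> str:
    digest = hashlib.sha256(body).hexdigest()
    previous = cache.get(url)
    cache[url] = digest
    if previous is None:
        return "first visit"   # always warn on first sighting
    if previous == digest:
        return "original"      # constant page: likely untampered
    return "hack attempt"      # page changed since last fetch
```

As the edit note observes, this breaks down for pages with legitimately dynamic content, which is exactly the problem the hostile-block reply below tries to solve.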

AppHash tries to address this by having something called a "hostile block" in a page. The hash is checked for everything BUT the hostile block, and the hostile block itself is checked to make sure it only contains static variable assignments. Yes, this is a fiddly bit of code - I'd love for people to see if they could break it.
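The hostile-block scheme described above can be sketched as follows. This is a hedged Python illustration of the concept only: the marker syntax, the "static assignment" pattern, and the function names are invented here, and AppHash's real rules may differ.

```python
import hashlib
import re

# Sketch: hash everything OUTSIDE the hostile block, and require the
# hostile block itself to contain only static variable assignments.

MARKER = re.compile(r"/\*HOSTILE\*/(.*?)/\*END\*/", re.DOTALL)
STATIC_ASSIGN = re.compile(r'^\s*var \w+ = "[^"]*";\s*$')

def check_page(page: str, pinned_hash: str) -> bool:
    hostile = MARKER.search(page)
    if not hostile:
        return False
    # Every line of the hostile block must be a static assignment.
    for line in hostile.group(1).strip().splitlines():
        if not STATIC_ASSIGN.match(line):
            return False
    # Hash the page with the hostile block's contents removed.
    stripped = MARKER.sub("/*HOSTILE*//*END*/", page)
    return hashlib.sha256(stripped.encode()).hexdigest() == pinned_hash

template = 'code(); /*HOSTILE*/var user = "%s";/*END*/ more_code();'
pinned = hashlib.sha256(
    'code(); /*HOSTILE*//*END*/ more_code();'.encode()
).hexdigest()
```

The hostile block may vary between fetches (different usernames, say) without breaking the hash, while any change to the surrounding code, or any non-static content smuggled into the block, fails the check.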

This is a really great idea. My only gripe is that it uses AES (due to that being the only cipher in jsCrypto).

Just curious, what's your objection to AES?

jsCrypto uses only AES-128, a 10-round cipher with a small keysize, against which a number of side-channel attacks are available. AES is also a very slow performer due to its computationally demanding nature, and yet it provides no additional benefit from the large amount of work it puts into scrambling data. A perfect example of another such case is the good old DES standard, with its measly 56-bit (7-byte) keys, which is computationally more demanding than newer, safer ciphers, yet still so easily broken.

In comparison, a cipher that is both fast and secure would be, for example, Blowfish, a 16-round cipher supporting 448-bit (56-byte) keys, which on 448-bit keys still operates about 25% faster than the not-quite-as-secure AES-128 does with its 128-bit (16-byte) keys. Blowfish is entirely free of patents, whereas most of the cipher modes of AES are patented.

The second most important factor in a cipher's security is its keysize. If the cipher withstands all cryptanalysis at its full number of rounds, the last resort is brute force, and with brute force every extra bit of keysize matters to the feasibility and practical possibility of breaking the data. 128-bit AES is today broken with modern parallel means in ridiculously short times (read: hours). Every extra bit of key theoretically (but not always practically) doubles the work required; a 129-bit keyspace is twice as large as a 128-bit one, a 130-bit keyspace is four times as large, and a 448-bit keyspace is 2^320 times as large.
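The doubling argument above is straightforward to work through: every extra key bit doubles the brute-force keyspace, so the ratios between keyspaces are powers of two.

```python
# Worked version of the doubling argument: keyspace grows as 2**bits,
# so each extra bit doubles the brute-force work.

def keyspace(bits: int) -> int:
    return 2 ** bits

ratio_129 = keyspace(129) // keyspace(128)   # one extra bit: 2x
ratio_130 = keyspace(130) // keyspace(128)   # two extra bits: 4x
ratio_448 = keyspace(448) // keyspace(128)   # 320 extra bits: 2**320 x
```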

Informative. Thanks for taking the time to reply.
