That url points to a page that has a big green warning.
"This is an experimental technology
Because this technology's specification has not stabilized, check the compatibility table for the proper prefixes to use in various browsers. Also note that the syntax and behavior of an experimental technology is subject to change in future versions of browsers as the spec changes."
Note that having a PRNG available doesn't mean it's being used properly. (See Debian and OpenSSL)
Ironically, the page he's linking to mentions that the function is an experimental technology and that the specification has yet to stabilize.
> But there is one feature I notice that is generally missing in cargo cult science. That is the idea that we all hope you have learned in studying science in school--we never explicitly say what this is, but just hope that you catch on by all the examples of scientific investigation. It is interesting, therefore, to bring it out now and speak of it explicitly. It's a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty--a kind of leaning over backwards. For example, if you're doing an experiment, you should report everything that you think might make it invalid--not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you've eliminated by some other experiment, and how they worked--to make sure the other fellow can tell they have been eliminated.
> Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can--if you know anything at all wrong, or possibly wrong--to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.
It's the same for crypto. This is where all the "don't roll your own" stuff comes from. Use stuff that other people have built, and other people have poked at.
For most low- to medium-risk profiles, HTTPS is safe enough; in those cases, in-browser encryption is rarely needed. But for the threat model applications like Cryptocat face, the danger of a targeted attack by someone with access to the CA infrastructure is not negligible.
So if you can trust the server with your data, you don't need JS cryptography. If you can't trust the server, how can you trust the JS it sends?
This is the fundamental issue here. The trust model is purely, irrevocably broken. This isn't something you can paper over.
He addresses that by using Chrome plugins and code verifiers. Not the ideal or most convenient solution, obviously.
But you also can't assume the JS is hosted and delivered by the same server the data is being transmitted through. JS is extremely portable.
As noted in the second point of my response in the root of the thread, this isn't just a non-ideal solution, it's a non-solution. It doesn't solve the trust boundary issues inherent here.
Yes, but just because it can be mitigated doesn't mean that it is mitigated.
Sure, but I just want to point to the fact that this is a solvable problem. A lot of people talk as if it is some sort of insurmountable obstacle, whereas I think responsible programmers can solve it and move on.
A defence is only as strong as its weakest link, and an attacker usually has all the time in the world to find and exploit it.
Optimism has no place here.
- having a misleading wired article (or several) written about you in no way qualifies you as an expert on cryptography. anyone who has any level of knowledge in the subject knows this guy is a total hack.
- everything that is somewhat correct about his products is due to people who _actually_ know stuff about cryptography telling him "no, no, no, you need to do X". the entirety of the architecture is effectively crowd-sourced. people telling you how to be a good carpenter is no substitute for being a skilled woodworker.
- nadim should stop making crypto products because bad crypto _puts people at risk_. if you make one bad product, i can see chalking it up to "well, at least he's trying". instead, nadim has routinely and repeatedly demonstrated his lack of real domain knowledge.
i suggest nadim make some turd-like web 2.0/3.0 driven product where security doesn't matter. he clearly has the tenacity to code but lacks the intelligence and domain knowledge to make security-related software.
If you do not know if your system is already compromised, you cannot guarantee that using your system is without risk. And that's even before we get to people willing to break down doors and other such nastiness.
That said, these are not "crypto flaws" - these are flaws that exist in systems even if the math of the cryptography is perfect.
That said, there's something to be said for interacting with other people. And focusing on security risks while ignoring denial of access risks seems silly.
Anyway, my reading of the people writing here suggests that almost no one here is really serious about security. There's too much focus on trash talk, and on struggles to properly characterize a now-nonexistent version of a blog page, and not enough focus on useful security mechanisms for me to have learned anything useful.
One thing to keep in mind, though: everything that happens on the internet has visibility. So if some code is advertised as stable and it changes from day to day? Something is wrong. Similarly if I download the same code from two different machines and I get something different for supposedly stable code, something is wrong...
And, one of the nice things about open source browsers is that - hypothetically at least - anyone can take the browser and build it for themselves, with their own monitoring and tracking code. This can take time, and modern browsers seem to be updated almost as fast as they can be built, but they do seem to have internal APIs which are rather stable. (Other approaches are also possible - what I am describing here is an off-the-cuff variant tailored at some of the risk suggestion raised in other posts on the page where I am responding...)
Grey boxes and human readable documentation for the win...
Now, I don't agree with every single bit of the original article, but I disagree with just about every bit of this one. I'll go point by point.
>> “Any attacker who could swipe an unencrypted secret can, with almost total certainty, intercept and alter a web request.”
No, it's really not mitigated by doing so. Because the crypto code is coming from the server you're attempting to prevent cleartext access to, the trust model is fundamentally broken here. The server could easily throw down JS code that does not perform proper crypto operations, leaks keys, or any number of other things. And that's not even speaking of XSS and other attacks making it trivial for an attacker to compromise your data.
Edit (based on edited OP): > [...] it is possible to deliver web apps as signed, local browser plugins, applications and extensions. This greatly helps with solving the problem.
Correct, this does mitigate some of the issue. Namely, it allows the code to be reviewed as it is in use. However, the server can still push code arbitrarily and completely compromise the crypto, and XSS is still an issue.
This is correct...ish. The problem is that it still gives too much trust to the server. The proper approach would be a browser plugin that exposes a Keyczar-style, high-level interface to JS crypto and allows secure operations on data without revealing access to the server.
However, this still isn't enough. Take the case of Gmail adding GPG support for emails. Let's assume you have a perfectly secure plugin such as one I described above. Even in this case, the server-driven JS still could send unencrypted data from the DOM up to the server, before crypto operations have happened. The trust boundaries are nonexistent here.
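The trust-boundary failure above can be modeled in a few lines. This is a toy sketch with illustrative names, not real Gmail code: the point is that any script the server can push runs with the same access to the plaintext as the legitimate crypto code, and can ship it out before encryption ever happens.

```javascript
// Legitimate flow: encrypt first, then send. Only ciphertext leaves.
function legitimateSend(page, encrypt, send) {
  send(encrypt(page.plaintext));
}

// Server-pushed (or XSS-injected) flow: same access, no encryption step.
// The plaintext leaves the page before any crypto operation runs.
function injectedSend(page, send) {
  send(page.plaintext);
}
```

Nothing in the platform distinguishes the two functions at runtime; both are "just JS the page is running," which is exactly why the boundary is nonexistent.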
Let's just stop here. Doing a CSPRNG right is hard. Very, very hard. Thankfully, browsers are starting to implement this as part of the browser directly, making it much less likely that JS code can skew results. This should be a solved problem soon, if it isn't already.
Edit (based on edited OP): They now mention the CSPRNG built into modern browsers. As mentioned, this is 99% a solved problem.
This is one point from the original article that I think is specious. If you have the proper lego blocks, you should be able to put together things like PGP securely. But those blocks don't exist.
> However, the ultimate problem with browser cryptography is that there is no standard for innate, in-browser encryption. Very much like HTML5 and CSS, there needs to be an international, vetted, audited, cross-browser standard for browsers to be capable of securely encrypting and communicating sensitive information. There’s no denying the urgent need for such a standard, considering the ridiculous rate in which the browser is becoming pretty much the mainstream central command for personal information.
This I agree with. However, every approach so far has really, really, really sucked. See also: https://news.ycombinator.com/item?id=4549504
To do this properly, we need a standard for high-level crypto operations, which is incredibly hard to pull off, and we need sane trust boundaries to prevent the server from skewing operations and stealing data. I hope someone will get it right eventually.
Edit: Forgot to respond to one point.
There are a few issues here:
1) Generally speaking, the fact that JS source is pushed down on every access means that there's no way for you to actually review it.
2) Even if the crypto code is isolated, completely reviewed, and found to be bulletproof, the trust boundaries are still screwed. Cf. the Gmail example above.
(Not legal advice; consult an attorney before acting on anything in this post.)
While in this specific case they intercepted the passphrase server-side, they specifically hinted that it wouldn't have made a difference even with their Java applet.
"The extra security given by the Java applet is not particularly relevant, in the practical sense, if an individual account is targeted"
That said: yes, bad crypto will defend against government 'attacks' as well as good crypto will, if you assume they're going to play by your rules. But it's still better to do it the right way.
My point is that adding in-browser crypto to a web app (https-only site, no external resources loaded, yada yada) has substantial impact on whether the information is disclosed or not in the end, in particular when we are talking about legally compelled disclosure.
Do you agree or not?
Simply executing and parsing XMLHttpRequests does not mean that the JS app is executing arbitrary code from a remote server. It may simply be parsing JSON payloads. This is completely up to whatever code is delivered as part of the browser add-on, and cannot be remotely overridden by the server.
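The distinction is easy to demonstrate: `JSON.parse` either yields plain data or throws, and nothing in the response body can run. What the add-on then does with the data is decided entirely by its own locally installed code. A minimal sketch:

```javascript
// Parsing a JSON payload is not executing code. A body containing
// JavaScript (e.g. "alert(1)") is simply rejected as invalid JSON.
function handleResponse(body) {
  return JSON.parse(body);
}
```

Contrast this with `eval(body)`, which would hand the server arbitrary code execution inside the add-on.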
You are correct that XSS is an issue, but as mentioned elsewhere, so are buffer overflow attacks in compiled code. Any crypto implementation will have "issues" that need to be engineered around. That's the purpose of independent review and open source code.
The original claim by the Matasano article was that JS crypto is "doomed," which I believe is patently false linkbait. While no one would agree that JS crypto is "ideal," I believe Nadim has raised very good arguments to say that, at the very least, it can be done properly.
It is absolutely possible to engineer an application in that way, but that is not how the vast, vast, vast majority of applications work. Most of them throw data straight into the DOM, they include scripts from the server, etc, etc. It's possible to do this right, but I rarely see it.
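For the "throw data straight into the DOM" failure mode, the basic mitigation is to escape untrusted data before it is ever treated as markup (or, in a real page, to use `textContent` instead of `innerHTML`, which achieves the same thing). A sketch of the escaping approach:

```javascript
// Escape the five HTML-significant characters so untrusted data renders
// as text instead of being interpreted as markup. Order matters: '&'
// must be replaced first or later entities would be double-escaped.
function escapeHtml(s) {
  return s
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```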
> You are correct that XSS is an issue, but as mentioned elsewhere, so are buffer overflow attacks in compiled code. Any crypto implementation will have "issues" that need to be engineered around. That's the purpose of independent review and open source code.
XSS is easy to find, easy to exploit, and exists everywhere in real-world web apps. This is comparing the danger of a car heading towards you at 200mph to a baby coming towards you in a stroller. They're just worlds apart.
At the end of the day, nearly every single web application I've tested (hundreds, if not thousands) has fallen to the same issues. This trend hasn't changed, and I don't expect it to do so any time soon.
There's a big difference between a technology being fundamentally broken and engineers simply not using it correctly.
This same sort of problem applied to PHP years ago. The default behavior of the language made it easy to write horribly insecure code, and many developers did. But it has always been possible to engineer a PHP app in a secure manner. You just had to work a bit harder at it in the past.
You are absolutely correct that most people don't write secure JS code today. But the web is an evolving platform, and as standards and engineering awareness improve, this will change.
The need for better in-browser crypto standards is clear. In spite of this, Nadim has proved that it is possible to build secure JS apps today. It is specious to simply dismiss his contributions because most other engineers don't write secure code.
Of course, the post we wrote in 2011, years before Web Crypto, agrees with you too.
The blog post says that it is necessary for code to be loaded as a local, signed browser plugin. This does not make JS source pushed down on every access.
> Even in this case, the server-driven JS still could send unencrypted data from the DOM up to the server, before crypto operations have happened. The trust boundaries are nonexistent here.
Why? A well-programmed JS app could simply tightly control all data received from the server and prevent insecure data parsing. It's totally possible, why isn't it considered?
When this response was written, the blog post said nothing of the sort. In fact, in that point the text has not been changed whatsoever.
> Why? A well-programmed JS app could simply tightly control all data received from the server and prevent insecure data parsing. It's totally possible, why isn't it considered?
I'm speaking of an adversarial environment where the server is seen as an untrusted peer (hence the need for client-side crypto above SSL). The reason it's not considered is simple: servers get owned, services change their minds on issues, and people screw up. While it is theoretically possible to accomplish secure client-side crypto given enough constraints, this does not map well to reality.
> While it is theoretically possible to accomplish secure client-side crypto given enough constraints, this does not map well to reality.
Well, in my case, we got our browser plugin audited by Veracode and things worked out.
As I said, that point (the one you quoted me on originally) does not make mention of signing, and the quote you gave there did not exist when I wrote the comment, as you well know. This is silliness of the highest order.
> Well, in my case, we got our browser plugin audited by Veracode and things worked out.
"Things work out" until they don't. As a fellow security professional, you should know as well as I do: no matter how many audits you do, you're still fucking something up, and someone will find it if it's valuable enough to do so. This is as true of your code as it is mine or anyone else's.
Edit: Magikarp is correct, there are several other changes. I'll annotate my post as needed. Done.
Edit: Thank you for re-reading! :-)
One really should assume that if a service can be compelled to act against the interests of its users, that will eventually happen.