WHY CAN'T I USE TLS/SSL TO DELIVER THE JAVASCRIPT
CRYPTO CODE?
You can. It's harder than it sounds, but you can safely
transmit Javascript crypto to a browser using SSL. The
problem is, having established a secure channel with SSL,
you no longer need Javascript cryptography; you have "real"
cryptography. Meanwhile, the Javascript crypto code is still
imperiled by other browser problems.
My minor quibble is that there are other reasons one might want cryptography besides establishing a secure channel, and so using SSL does not necessarily solve whatever problem one was trying to solve with JavaScript.
For instance, the client might need data that is stored encrypted on the server, and you want to do the decryption on the client.
I think the fourth or fifth graf of this (poorly written, I agree) article explicitly calls out that there are reasons beyond a secure channel why you might want clientside browser crypto.
So, I'm actually going a step past the obvious argument against jscrypto ("this is an inferior way to build a secure channel"). Not only is it an inferior secure channel, but it's also an inferior and dangerous place to try to build crypto functionality even if you're using SSL/TLS to establish a secure channel.
There is one (and probably only one) case where JS crypto makes sense: when the server operator wants to convince a third party that he cannot access the encrypted data at rest stored on the server. Client-side JS-based encryption (and, by the way, things like Hushmail) does not help in any way when you don't trust the server.
Exactly. I always considered JS Crypto orthogonal to TLS. You still want TLS for authentication and establishing a secure channel to ensure you're getting the right code from a person you trust.
Javascript (or generally, client-side) crypto would still be invaluable for other things. If it can be implemented safely, you could do stuff like GPG for mails and OTR for instant messaging. For Javascript, that appears to be a pretty big "if", though. If it can't be implemented safely, well, then it's a matter of opinion. Some will argue the crypto is still useful despite a couple of viable attack scenarios; others will argue that an appearance of security is much worse than no security at all.
> Just as all programs evolve towards a point where they can read email, and all languages contain a poorly-specified and buggy implementation of Lisp, most crypto code is at heart an inferior version of PGP.
Delivering cryptographic software via a network, having it run in a "malleable" runtime and creating a user expectation that it will be secure, is not new.
We've been doing this on the desktop for years.
I'd like to pose a question to the authors:
Don't all the premises you've based "Javascript Cryptography Considered Harmful" on apply equally to downloaded desktop software?
- A chicken-and-egg problem
- A malleable runtime for apps with admin access
- Code verification being defeated by other hostile apps on the system
- Inability to trust the secure keystore, because of the chicken-and-egg problem and other hostile apps installed prior to the crypto app's installation
It didn't hold up well against attack there, but there was usually an underlying bit of assumed trustworthiness on the local network that shifted the blame.
But protocols designed for enterprise desktops always turn out to be an unmitigated disaster when run across a hostile internet, particularly from a security perspective.
Kerberos fits your description ("protocols designed for enterprise desktops") exactly. The hardest thing I've found about running it across a hostile internet is dealing with NAT issues. The (latest version of the) protocol itself is pretty decent from a security perspective.
Then again, the original version of it, when it was designed for enterprise desktops, would not be particularly effective.
Actually, Kerberos was designed at MIT for the explicit scenario of an untrusted network: "I'm OK, you're OK, the network is hostile".
It may work acceptably for VPN-like applications, but how does it work for actual internet applications? E.g., how do you enroll new clients into the authentication realm over the internet?
Yes, it was designed for an untrusted network. But, there are good reasons why older versions of the protocol are not trustworthy today, which go beyond the increase in computer power relative to the key space [1].
The problem with enrollment is more or less the same as the problem with enrollment using any other crypto trust scheme. In the most secure situation, you'll want to start the process in person.
In practice, with kerberos you have to trust the KDCs holding your keys. With asymmetric systems, you have to trust the CAs who have signed the keys of the parties you are communicating with.
I agree that it isn't a good solution for what we might call "web scale" applications. It is a reasonable way to do password authentication, but federated authentication would require establishing cross-realm trust between identity providers and relying parties.
We're talking about "problems" in the context of "things that make cryptography hard". The other security flaws of C/C++ are orthogonal to this issue.
This isn't a value judgement on Javascript. I like Javascript. The hard fact of the matter is, not every good programming environment is going to be suitable for cryptography.
Perhaps, but almost all buffer-overflow and remote-code-execution bugs are very similar to XSS attacks--feed the program something it doesn't expect along with some junk for it to execute. The mechanism is different but the concept is the same.
(Note, originally in this proposal, I support the idea of using a browser plugin, although now I think it should be functionality baked directly into the browser)
By adding an encryption="" attribute to the html markup, and the user instructing the browser on how to utilize a private key on their system, you can encrypt and decrypt data in the browser, without requiring a trusted server.
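A rough sketch of what such markup might look like (the `encryption` attribute, key hint, and decryption behavior are all hypothetical; no browser implements anything like this):

```html
<!-- Hypothetical markup for the proposal above, invented for illustration. -->
<div encryption="openpgp" data-key-hint="me@example.com">
  -----BEGIN PGP MESSAGE-----
  ...ciphertext the browser would decrypt with the user's local private key,
  rendering the plaintext into a region that page scripts cannot read...
  -----END PGP MESSAGE-----
</div>
```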
Something like that seems horribly hard to get right.
Forget about cross domain scripting attacks, now you can't trust JS from the point of origin. Yet at the same time, you'll probably want it to be able to do all sorts of fancy AJAX-y stuff on the decrypted content. (Certainly on other parts of the page.)
You're right, in order to get the security right, you wouldn't be able to access the decrypted content with javascript. The browser would prevent that from happening.
Yes, potential functionality would be lost, but that's always part of the trade-off for security.
But I don't think the trade-offs would be detrimental, from a usability standpoint.
From a user perspective, how would you identify "safely" encrypted elements, in a way that can't be spoofed by the page rendering a nonencrypted element with appropriate styling?
Could be the same way the browser identifies "safely" implemented HTTPS (i.e., popup, icon, etc.). A nonencrypted element with appropriate styling would not have the same browser-level indications that an encrypted element had.
Since no one has brought it up yet -- I figured I would.
The entire time I read this article, I was thinking of Clipperz (https://www.clipperz.com)
Their entire product is based on the model of in-browser cryptography of your credential payload.
You can read more here: http://www.clipperz.com/security_privacy
and in numerous blog entries and other articles around the web, but it's open source, so you're welcome to review it yourself.
The basic premise is that your browser, through javascript, deals with encrypting and decrypting a blob object which is stored on their server, transmitted via SSL, which they can't read, and can only be read by you when your browser decrypts it with javascript.
I'd love to see the author's take on the Clipperz implementation.
At first glance, without going so far as to create an account, they do a lot of stuff right.
I note that the HTTP session cookies set pre-authentication are not marked 'secure'. I.e., they can be sent in the clear over HTTP connections as well. This may or may not be a real problem in practice.
I'm throwing a rock into the ocean here with all these comments. But I feel like the entire assumption of the article is "SSL is expensive + hard." Is it really?
All the big companies are making the change to SSL. Sure it may seem expensive, but how many engineer hours will you throw away on cutting those costs? Will those hours have cost you even more? Further: what is the cost if your engineers' implementation is poor and easily broken?
You spend money on SSL, SSL tech gets cheaper and faster. Rinse, repeat.
I'm not trying to be a twit here, but be fair to the site admins: say that after *.matasano.com works fully under https without any mixed-content warnings.
Usually it depends on how complex and interdependent your site software has become. How many places do you have to find and fix where "http:" was hard coded? How many existing links and web APIs are going to break if you start returning redirects? How many third-party affiliates and SDK partners are going to be affected by your change? Are you going to discard everyone's existing sessions? How are you going to test all this stuff in advance?
I don't think it reads like "a cartoon samurai" (but then I don't know how samurai sound). What you're really noticing is how lazily written it is. That whole article is a single draft I wrote almost a year ago and never touched again.
You're seeing it because I was too hung over this morning to repeat all the arguments it contained when Javascript crypto came up on HN.
I didn't submit the article as a story; in fact, I've been sort of dreading it being submitted ever since I wrote the comment that had the link in it.
Don't dread, it is clear and factual and free from obvious typos. You've even got a jwz quote and radioactive boy scout references - I rather enjoyed it.
Sorry for being an asshole - it's actually quite readable and gets the point across, I just meant to comment on the somewhat dire tone. Says something good about your writing if a quick draft gets the point across and people want to read it.
Actually, I submitted it because I saw it on Reddit, liked it, and was surprised to see it wasn't submitted here. Though somebody may have submitted it to Reddit after seeing your link on HN.
Well, unlike this thread, it only has 2 people spreading misconceptions and inaccuracies about cryptography. And also unlike this thread, they're heavily downvoted. So there's that.
Probably because it's fun? If you spend half your day reading papers on fully homomorphic encryption, I think we can forgive a little levity to compensate.
Most browsers (including mobile ones) except IE support the <keygen> HTML element [1]. This creates a "secure" keypair. The public key can be sent to the server and the private key is stored locally somewhere (I guess gnome-keyring, Mac Keychain or Windows registry). I haven't ever seen this in use except in technical demos. While this can't be used for many things where JS encryption is in use, I'm sure it deserves more recognition than it currently has.
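For reference, usage looked roughly like this (the form action is made up; the element was never widely standardized and has since been deprecated):

```html
<!-- On submit, the browser generates a keypair, keeps the private key in
     the local keystore, and POSTs the public key as a signed SPKAC blob. -->
<form action="/enroll" method="post">
  <keygen name="pubkey" challenge="server-supplied-nonce" keytype="rsa">
  <input type="submit" value="Generate key and enroll">
</form>
```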
The vulnerability of implementing crypto with in-browser JavaScript seems to be coming up more and more in the past year.
Tl;dr: A MITM could serve you a fake implementation of said crypto and you could very well be encrypting your content with ROT13 while thinking you're getting some NSA-grade security or what have you.
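As a toy illustration of that point: a substituted "cipher" can keep the real function's name and call shape while doing nothing useful, and the page calling it has no way to notice. (Hypothetical code, obviously.)

```javascript
// Toy illustration only: a MITM-substituted "encrypt" that looks like a
// real cipher to the calling code but is just ROT13.
function encrypt(plaintext /* , key silently ignored */) {
  return plaintext.replace(/[a-zA-Z]/g, (c) => {
    const code = c.charCodeAt(0);
    const base = c <= 'Z' ? 65 : 97; // 'A' or 'a'
    return String.fromCharCode(base + ((code - base + 13) % 26));
  });
}

console.log(encrypt('attack at dawn')); // "nggnpx ng qnja" -- not exactly NSA grade
```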
My initial instinct is "browsers should expose some native crypto functions which have always-preceding namespace priority" but after thinking about it more, it doesn't matter. The attacker could just push code that leaks data before it even hits the crypto functions.
Looks like the only workaround is to use SSL/TLS.
(This is all assuming that the implementation of the crypto is solid, of course.)
That's a problem with serving java applets which do crypto, too. (there were some better protections for applets, but they're still vulnerable to code replacement by the server (thus not able to do hostproofing))
Unless the JS is served over SSL/TLS, it is vulnerable to a MITM modifying the code en route.
Throw in SSL and that fixes that problem.
An interesting example of this is glbse.com, it's a bitcoin stock exchange, on the site there is a client python library and a JavaScript client application (to run in the browser).
Putting aside how poorly documented (and designed) the JS client is, what they use it for is authentication/authorisation not... for actually encrypting.
All requests from the JS client are signed with a RSA private key, the server keeps a copy of the public key, and verifies any incoming requests.
The private keys can be stored in html5 local storage (encrypted if you want).
The keypairs are initially generated on the server.
When they first started making this available it was being served over plain text http, now they use SSL for the reasons above.
The idea is to remove the need for servers to store username/password combinations. The users account ID is a sha256 hash of their public key.
That said, as another commenter has mentioned, should the server be compromised, the JS crypto lib may be overwritten with something else that steals the decrypted content in the browser.
The thing is, the browser still depends on the server for the crypto implementation EVERY TIME the user logs in to use the application.
Native applications also suffer from this, since any download of a native app still depends on the hosting server not being compromised. However, the vulnerability only applies when the application is being downloaded, instead of over and over again on each use, as in the case of a web application.
The only way around this I can think of is using extensions. In this manner the trust issue happens once, during the download of said extension. By doing this, the implementation would be almost as secure as a native application.
Almost as secure, because web pages are still content plus execution. For the solution to be secure, the page needs to be secure itself, since other Javascript or the implementation of the web app may not protect the user from XSS-based attacks. (You could say the same about native apps, though, as their security depends on how well the code resists things like buffer overflows.)
I personally think that it would be the best bet right now.
Furthermore, and most importantly, by using SSL/TLS, the data being sent by the browser to the server is encrypted as it passes through the network, but remains readable by the server when it reaches it. Whereas with client-side JS crypto, the server cannot read the sensitive data. If the writer does not trust the server to send correct JS code, how can he trust it with the sensitive data itself, in plaintext, which is what he is doing by using SSL/TLS? As I mentioned above, there exist browser plugins for a few projects to verify the integrity of the JS crypto they use.
A user looking at the "cryptocat session verifier plugin" has to think to herself, "What is this thing verifying? Is it just checking a digest of the Javascript files? How does it know if the CSS files have surreptitious Javascript in them that negates the crypto?"
Which is funny, because if cryptocat just required a browser extension that implemented all the crypto, it wouldn't need to verify anything at all. What was the point of Javascript crypto at all, if you're just going to push an extension onto people?
Your unease with SSL/TLS and its X509 PKI is entirely orthogonal to whether Javascript crypto works or does not work. Wanting an alternative to TLS does not give you an alternative to TLS.
It's funny how many arguments supporting Javascript crypto devolve to "but the world would be so much better if this stuff worked".
Actually, in the case of that particular plugin, every bit of code, including CSS, JS, and HTML, are verified.
> It's funny how many arguments supporting Javascript crypto devolve to "but the world would be so much better if this stuff worked".
It's hilarious how you completely ignore the much more pressing issues in HTTPS that I've pointed out. The server can read plaintext in the case of HTTPS, which is mitigated in JS crypto; not to mention that things such as SSLStrip and the COMODO hack have put many chinks in HTTPS's armour - HTTPS is a technology that derives a lot of trust from being able to authenticate the server, whereas we've seen authorities such as China create entirely passable fake certs to fool dissidents.
When the server is providing the client with its crypto code, the server can read the plaintext no matter what. I don't like these bullshit false promises.
If you are worried that China may own one of the root certs in your browser, remove the certs you don't trust. Hell, remove all the certs. Every mainstream browser allows you to do that. In most of them, you can even set up permanent exceptions for Amazon.com after you strip your certs out.
Meanwhile, HTTPS/TLS has over a decade of the most intensive study anywhere on the planet, and your favorite half-assed ad-hoc bespoke random "crypto-cat" scheme... does not.
> When the server is providing the client with its crypto code, the server can read the plaintext no matter what.
What? I don't see how that's necessarily true. The crypto code can be verified and if it turns out to be fine, then the server shouldn't be able to read the plaintext "no matter what." That's silly.
> Hell, remove all the certs.
Is this seriously your solution?
> ...does not.
This borders on ad hominem - under this logic, we should never strive to attempt creating new techniques but always rely on 10+-year-old ones. There will always be new techniques that need testing, and the fact that older ones exist is not sufficient to prevent the study and advancement of new research.
Yes. I'm saying that if your execution environment is a browser, and your security code comes from a server, that server is going to be able to read your plaintext no matter what.
Yes. If you think China is intercepting your HTTPS traffic, remove all the root certs in your browser. Then browse to the key sites you're worried about protecting and create exceptions for each of them. China will not be able to use "fake certs" to intercept traffic to those servers.
Yes. In cryptosystems, 10+ years of study does count for a lot.
> that server is going to be able to read your plaintext no matter what.
I'm truly sorry, I follow you on Twitter and actually really respect your opinion, but that's just nonsense. The HTML, CSS and JS can all be verified, either by a plugin, or by researchers studying what the server is sending, or in a variety of other ways. This ultimatum you're giving is silly.
> Yes. If you think China is intercepting your HTTPS traffic, remove all the root certs in your browser. Then browse to the key sites you're worried about protecting and create exceptions for each of them. China will not be able to use "fake certs" to intercept traffic to those servers.
That is not a real solution.
> Yes. In cryptosystems, 10+ years of study does count for a lot.
Of course - I never suggested it didn't - but that doesn't mean that new research can't undergo and survive skepticism.
You just stuck a plugin into the mix. You can make crypto work from a browser plugin. Just use the plugin for all the crypto; what the hell is the point of Javascript cryptography if you have a plugin? That's reckless to the point of negligence.
Although I don't really appreciate the tone here, I think the point you are trying to make is: if we are going to use a plugin, just use the plugin all the way; why bother with Javascript for crypto at all?
I seriously don't appreciate your debate style, but I feel obligated to point out that your question has been discussed at another section of this thread: http://news.ycombinator.com/item?id=2935473
I think the point you are missing is that even if researchers/people can verify what the server is sending in general, there's no way for you to verify that the HTML/CSS/JS it sends you at that particular point in time is the same as the generally vetted version.
... and what feature of any Javascript browser crypto makes it anything more than trivial to defeat by such an attacker?
You can use sslstrip or malicious trusted CAs to argue the imperfection of SSL/TLS as deployed by browsers and users, but it does not follow that that somehow makes Javascript browser crypto any less broken.
I don't understand why a protocol with known major problems that can be compromised by oppressive governments is a better solution to client-side crypto that can be verified for integrity and hides the plaintext from the server.
Only the configuration of the CA-based authentication can. Nobody loves X509 PKIs, but so far, they seem to "work".
This matters because you can literally write a HOWTO that my mom could follow to get a browser configured so that China can't snoop on (many of) your HTTPS connections. No code required.
SSL/TLS is a better solution. I find it ridiculous you suggest that browser-based JS is better, frankly. Comodo was breached and the certificates were revoked using the theoretically-sound, tested, and implemented PKI solution. While you do well to point to existing attempts to work around JavaScript's deficiencies, it's genuinely surprising you could say they exceed the capabilities of SSL/TLS.
What about sslstrip? What about the fact that CAs are being hacked left and right, and that governments such as China are known to produce fake certs to wiretap dissidents? What about the fact that the server can still read your data? Again, how can you trust the server with your plaintext if you can't trust it to serve you good JS crypto code? There have been attempts to mitigate this problem in JS crypto anyway, using browser plugins that perform integrity checks.
> What about the fact that CAs are being hacked left and right, and that governments such as China are known to produce fake certs to wiretap dissidents?
This is an authentication and a trust issue. JS crypto does not solve this at all.
> What about the fact that the server can still read your data? Again, how can you trust the server with your plaintext if you can't trust it to serve you good JS crypto code?
If you don't trust the server, why are you sending it the data? And how is JS crypto any better?
If you're trying to bash SSL to promote JS crypto as a better alternative, you should probably choose problems that JS crypto doesn't also have.
> There have been attempts to mitigate this problem in JS crypto anyway, using browser plugins that perform integrity checks.
This isn't JS crypto. This is "browser plugin crypto, that we're choosing to compromise by tacking on a huge JS part that someone can backdoor".
For an unmodified and basically functional browser, in a way there isn't a "client" and a "server" context. There's only a server context. The client is actually a sandbox that has been designed to carefully partition off the server code from everything else that the server doesn't authorize, and hasn't got enough of an "identity" to be its own thing. A normal, correctly-functioning browser is designed to basically be an extension of the server in question, and there are no effective mechanisms for the client to have any state placed in it and fed back to that server that did not come from that server. Even when you type a comment into a text box and hit 'submit', what comes back to you is fed to you by the server in question.
When a server serves you a page, it basically owns that context. Crypto can secure further communication between that page and the server (via SSL), but there's basically no room to hide something from the server, and if there was that would probably be considered a browser bug at some level. If the server does not receive plaintext, which a network-level analysis may say it does not, it is only because the server has graciously consented to not receive some plaintext, not because you have actually somehow built a webpage that can prevent the server from getting that plaintext. One tweak to the server, it serves slightly different JS and offers a slightly different API and bam, it's getting plaintext.
Web pages just don't have enough identity of their own to do anything like this separate from the server, without further extensions (which is why the topic of plugins keeps coming up).
> JS crypto remains better since there exists techniques to verify the crypto, and the server does not receive plaintext.
You are giving the plaintext to code that has (at best) the same trust level as the server itself.
What data is it safe to give to that code, that isn't safe to send (in a way that can't be mitm'd) to the server?
> That's ridiculous. Integrity checks exist for many crypto-systems, don't perform crypto - they are an extension.
I'm not sure what you're saying here - you might want to clarify exactly what you mean.
In doing so, perhaps you could tell me why you think "browser plugin validator + untrusted JS crypto code" is more secure or otherwise better than "browser plugin crypto with no JS".
Don't get me wrong, browser plugin crypto probably works fine, but it's also possible to use a browser plugin to turn untrusted JS crypto code into trusted JS crypto code. The plus side of that is that the crypto code would still work without the plugin.
So your answer as to why JS+validator is better than a plugin doing crypto is that "it still appears to work fine even when the system is utterly broken".
...
When it comes to security, that is actually a negative.
Whether you consider it "utterly broken" or not without the validator depends on your trust of the server - similarly to HTTPS, which is vulnerable from the trust standpoint thanks to CA vulnerabilities.
Furthermore, it's possible to verify the code manually, although tediously, whereas a CA impersonation is perfectly transparent and very difficult to detect. The validator plugin usually does the job.
...and your trust of the network itself, unless you're using HTTPS in addition to your ad-hoc solution.
And vulnerabilities in the trust chain are not HTTPS vulnerabilities - they are trust chain vulnerabilities. It's straightforward to address them yourself simply by not trusting any CAs you don't trust.
> unless you're using HTTPS in addition to your ad-hoc solution.
Yes, I don't see why not.
> It's straightforward to address them yourself simply by not trusting any CAs you don't trust.
From that standpoint, we're assuming that everyone with a browser knows what a CA is and discerns between trustworthy CAs. I could equally say that it's straightforward for users to simply review the JavaScript code they don't trust - but the problem is that there exist many users who don't know how to review JavaScript and don't know what a CA is, and will just rely on what their browser tells them (which is where a plugin comes in handy.)
> but the problem is that there exist many users who don't know how to review JavaScript and don't know what a CA is, and will just rely on what their browser tells them (which is where a plugin comes in handy.)
> a plugin can be verified locally at length whereas a compromised CA can issue a fake cert that is very, very difficult to detect.
Which is it?
Either you're claiming that verifying a plugin is easier than not trusting CAs that you don't trust, or you're arguing two completely contrary positions.
edit:
And this then brings us back to the earlier point, which is why not just use the plugin for crypto in the first place?
The plugin is the same for all users - not everyone who downloads and runs GnuPG knows how to verify it - but some people might go as far as to either compile it from source or even decompile it. If they find something suspicious, they will report it so that other people who have downloaded the software can be aware. The same thing goes for the plugin (except that the plugin is easier to verify than a compiled binary.)
When I talk about people who can't review JavaScript or CAs, I'm talking about the average computer user. I am not saying that "verifying a plugin is easier than not trusting CAs that you don't trust." The average computer user doesn't care about CAs or JavaScript.
My point remains: a plugin can be verified by others who have downloaded it - a compromised CA is extremely difficult to detect.
1. Just trusting the cert, say for Amazon, directly, rather than trusting the chain of trust using a CA (distrust all the root CAs and just trust Amazon's cert)
Absolutely, to the point that it's ridiculous to compare it to client-side Javascript.
> Comodo was breached and the certificates were revoked using the theoretically-sound, tested, and implemented PKI solution.
Weeeellll actually... Mozilla, Google, MS had to rush out a code patch to manually blacklist the fraudulent certs. Revocation checking is implemented so weakly in browsers and other HTTPS clients that it just doesn't work when it comes down to it.
Of course, the browser Javascript doesn't have any problems of weak revocation checking. It's simply altogether unauthenticated in the first place!
> Furthermore, and most importantly, by using SSL/TLS, the data being sent by the browser to the server is encrypted as it passes through the network, but remains readable by the server when it reaches it. Whereas with client-side JS crypto, the server cannot read the sensitive data.
This is a shortcoming in the argument made in the article, because it wrongly assumes that we would always want the server to decrypt our data.
I am not saying JS does solve this, but SSL doesn't, and it isn't even meant for that use case.
Or it can mean you trust it to do something that you know and only that.
Which one do you refer to?
The only way to verify the behavior of code is to do the source auditing yourself. That is something our average Joe would not be able to do anyway, so you are back to trusting entities instead of behavior. In that case, we either get back to CAs or to trusting individual certs directly.
It's easier for an attacker to passively observe a stream than to actively inject specifically tailored code into it.
At least with JS crypto, amateurs with Wireshark and Firesheep probably can't steal your stuff. But since that's all the protection it offers, it shouldn't be advertised as anything more ("This NOTEr 2.0 App uses AES-256 in JS over HTTP to provide NSA GRADE security and privacy!"). In that sense, it probably causes harm to use JS crypto, but JS crypto isn't absolutely ineffective.
No, it isn't. It's actually harder. To intercept a stream, you need only capture a few packets. To passively observe it, you have to position yourself in a place that sees all the packets.
You're talking about computational difficulty. He's talking about practicality for people who aren't experts. AFAIK there is no browser plugin that you can download that will help you MITM an HTTP session, whereas there is a browser plugin that will let you passively capture plaintext from said session.
No, I'm talking about practicality. An attacker can intercept a session simply by controlling any part of the DNS resolution chain. An attacker can passively observe an entire session only by being situated somewhere in the actual path between the victim and the server.
And how do you do that at the corner Starbucks without using the command line?
Edited to clarify since apparently I can't reply to a child comment: There are tools that require next-to-no technical expertise to passively observe HTTP sessions. There are no such tools that I'm aware of to execute an HTTP MITM without control of the network infrastructure.
I think what the original poster meant was basically just "in a situation where you absolutely, definitely, cannot have SSL, javascript crypto is an iota better than nothing, because at least it breaks Firesheep and Wireshark".
What do I care whether there's a pretty UI for an attack?
HTTPS/TLS is designed to be secure against adversaries with millions of dollars dedicated to attacking it. Why am I even dignifying systems that rely on the attacker's inability to use a command line, or write Perl code?
Have you reasoned through how much code it would take to make a "Firesheep" that worked against Javascript crypto implementations? It's not even a little bit hard.
This sounds like the US military's rationale for not scrambling video feeds, and for using armour-piercing rounds, in the Middle East. The Soviet Union can unscramble weak cryptography, and their soldiers will have armour, so weak encryption and non-armour-piercing bullets are considered harmful.
JS encryption is certainly less secure than SSL, and may give a false sense of security to people who want to use it as a replacement.
The command and control channel is, and always has been, encrypted -- because that's both more important and easier to manage. UAVs are flown by airmen sitting at comfortable desks on U.S. military bases, where key management is simpler. But the video feed is different. It needs to be available to all sorts of people, of varying nationalities and security clearances, on a variety of field terminals, in a variety of geographical areas, in all sorts of conditions -- with everything constantly changing. Key management in this environment would be a nightmare.
So do it, and render the entire line of discussion pointless. This argument comes up often when discussing JS crypto, and while I agree it's silly for any system that actually has assets worth protecting--personal or financial info, for example--there are plenty of attackers that don't have financial motivation, resources, or knowledge, and just want to use the easiest crap available to them to mess with other users.
And how do you do that at the corner Starbucks without using the command line?
Despite what Hollywood says, using a command line does not make you a super leet hacker. If your security system can be broken with a command line, then I doubt you care that much about the data you're securing. Just put the data in plain text and call a spade a spade.
It's only a little easier. There are three scenarios I've been able to come up with:
The attacker triggers some kind of bug or odd mode in a router. That router happily floods packets to the wrong places, but still only accepts them from the right place.
The attacker can decrypt your WiFi plaintext, but not in realtime. You've already received the response from the server by the time the attacker can craft one for you.
The attacker is afraid of getting caught (he possibly needs to be on-location, and can't work through an intermediary), and is therefore restricted to passive methods.
As it happens, all three of these are pretty unlikely. They aren't impossible, but why put forth a large effort for JS crypto when you could put forth a small effort for SSL?
Here are some ways to modify traffic on typical networks, most have command-line or GUI point-and-click attack tools available. Some of these are documented to have been conducted on wide blocks of internet traffic.
The attacker broadcasts a SSID of 'Free Public Wifi', or one similar to what you normally use, then DoSes all other access points when you try to use them.
The attacker is on the same ethernet broadcast domain and replies to your DHCP query quicker than the legitimate DHCP server does.
The attacker poisons the router's ARP cache and impersonates your MAC.
The attacker leverages the unauthenticated nature of Spanning Tree Protocol (STP).
The attacker pwns your home wifi router through a default password on its admin page or any number of other bugs.
The attacker messes with your DNS name resolution for sites of interest. Remember that time DNS resolution for some S. American (and a few US) ISPs got routed through the great firewall of China?
The attacker advertises a more-specific route for your IP address or that of a target site. Remember when that little ISP in Pakistan took YouTube offline?
The attacker could be your ISP working under order of some government. Yours, that of a country you're visiting, or of some territory the packets pass through along the way.
Or you could be using a hotel, airplane, library, mobile, etc. internet connection where transparent intercepting MitM proxies are standard fare. How good do you think the hotel staff are at securing that internet billing appliance they bought?
And of course, the attacker could always cut a cable somewhere and install a tap.
Was Amazon ever vulnerable to session resumption attacks? How?
Either way: every vulnerability we've found in SSL/TLS (the protocol) makes me more confident in it. Those findings are the product of millions of dollars of attention. Why would we think that any alternative to TLS wouldn't have the same flaws, or worse ones?
Was Amazon ever vulnerable to session resumption attacks?
I have no direct knowledge. I was personally extremely careful to never test any actual sites.
But unless they were running MS IIS or certain brands of SSL offload devices on every accessible host with the cert on it, they were likely willing to conduct client-initiated renegotiation. I know they use client certs for EC2 stuff, that likely involves a server-initiated renegotiation opportunity as well.
Just checked: they still haven't deployed the actual protocol fix, RFC 5746.
How?
Frank Heidt came up with an awesome little exploit that isn't affected by anti-CSRF mitigations. Just find one single URL under HTTPS which redirects to HTTP (e.g., https://www.amazon.com/). Inject a request to that, and now you have a plain HTTP request to play with. You could enter an sslstrip scenario, or replace the cert with a legitimate cert for a phishing site at a point where the user isn't expecting the URL to change. How many users stop browsing after receiving a mixed content warning, I wonder?
Other researchers showed you could replace the EV cert with a simple DV cert at that stage and the browser would still show the green bar.
Either way: every vulnerability we've found in SSL/TLS (the protocol) makes me more confident in it. Those findings are the product of millions of dollars of attention. Why would we think that any alternative to TLS wouldn't have the same flaws, or worse ones?
Absolutely. Those who criticize TLS need to understand why it is the way it is and either propose improvements (the IETF [TLS] mailing list is open) or propose a replacement that really does a better job delivering on all the security properties, not just one or two.
> How many users stop browsing after receiving a mixed content warning I wonder?
That's a blog post (and supporting data drop) that I would be interested in reading. Based on my experiences at a former employer the rate of cart abandonment on mixed pages is about 20% greater than on fully valid pages (someone hard coded a logo reference into a template) but the sample was fairly small and I would be curious to know what the difference would be on a broader sampling of sites.
From what I hear, newer browsers are said to be tightening the screws on mixed content and giving increasingly annoying warnings. For example, showing a red slash through the lock icon.
If you were very selective about it, and pulled it off without any disruption, you could get away without anyone noticing. Or, do it from a third-world ISP, and say "It was an accident whaduyagonnadoaboutit".
If there are big BGP changes, the world notices. If the BGP for the specific netblock your site is hosted on changes, the world probably does not notice. Your users will see slightly degraded performance (actually, in some cases it will improve), and (presumably) a number in the NSA will be <1% higher for a few minutes, but that's about it.
I've worked on software that literally (intentionally) publishes BGP updates every ~10 seconds. It didn't bother anyone, because it wasn't playing with many addresses.
I mostly agree with the gist, but it's not really the Javascript that's the problem. It's trying to do crypto in code downloaded over the network. Implementing crypto in a Flash/Java/Flex/Silverlight/other flavor-of-the-month applet has all the exact same problems. Well, you might have a secure RNG (doubtful), but apart from that the problems are all the same.
Likewise, it's probably possible (though very difficult, and still not a good idea) to have good crypto in JavaScript as long as it's either server-side or built into the client.
Java crypto doesn't have all the same problems; the Java plugin environment has enough code to run crypto inside of it, and Java applets are (or can easily be) self-contained enough to verify integrity. Browser Javascript (and by extension Flash) is a uniquely hard place to make crypto work.
If by "has enough code to run crypto inside of it", you mean "the SSL support and the javax.crypto library", sure, it gives you some useful stuff. You can also access all that stuff from JS without ever writing a line of Java code, so that's a wash.
As far as self-contained Java applets go, every attack that can be used against JS can be used against Java. It's a trivial exercise to instrument any method call you like, including calls to any method that attempts any kind of verification. Java bytecode is really quite transparent.
I don't see how implementing (correctly) an already known one-way hashing function (SHA/MD5 for example), and then using this to form part of a basic challenge-handshake protocol is not better than sending the password over the wire.
In this way the attacker would be unable to gain anything useful from seeing the challenge or handshake packets, and the only remaining vector is some form of keylogger.
Under the heading That attack sounds complicated! Surely, you're better off with crypto than without it?
"Any attacker who could swipe an unencrypted secret can, with almost total certainty, intercept and alter a web request. Intercepting requests does not require advanced computer science. Once an attacker controls the web requests, the work needed to fatally wound crypto code is trivial: the attacker need only inject another <SCRIPT> tag to steal secrets before they're encrypted."
This seems to happen whenever Javascript crypto is brought up. It's crazy. SSL is a little bit of work, and it costs money to purchase a cert (but not very much). It's a bit more CPU overhead on the server, but most people have cycles to spare. For the most part, SSL should be no big deal.
What am I missing here? Are people too cheap to purchase an SSL cert? Theoretically the PKI is only as trustworthy as the CAs, but that can't be why people are acting like SSL/TLS isn't even an option.
Challenge-response hashing in JS is MitMable and likely to have implementation errors.
SSL has defenses against MitM attacks, and is implemented in very widely-deployed libraries, so if there's an implementation error you'll hear about it before it's used on you.
The drawback is of course that you need to have a signed key. That is the price you pay for MitM resistance. We're not supposed to use self-signed keys (which are free), even though they are strictly better than JS challenge/response hashing.
Under the "Really? Why?" section, start with the first point:
If you don't trust the network to deliver a password, or, worse, don't trust the server not to keep user secrets, you can't trust them to deliver security code. The same attacker who was sniffing passwords or reading diaries before you introduce crypto is simply hijacking crypto code after you do.
You don't have to do anything that sophisticated; you can just intercept one of the HTTP requests that builds the page and insert code to steal the passwords.
In the usability/security tradeoff, security companies will always try to scare you to the right (they make money doing so). Most developers are comfortable being in the middle. After all, do we really need to implement signed client-side Javascript (with all the problems and centralized infrastructure that imposes) just to store todo items?
It seems to me that this article contains a lot of good points about Javascript cryptography. The author's error is that he is unable to see any reason to build a secure application based on Javascript, because his prejudices are too strong.
I am the founder of Passpack. We have used Javascript cryptography since 2006. We have 16,000 active users and nobody has ever reported loss of data. I think that if you are conscious of all the issues connected with browsers and their problems, you can build a good system that protects not only the security of the users but also their privacy.
And privacy cannot be protected with a server-based cryptography system.
I think using TLS is largely the right solution, and "shouldn't be hard" (a $400 certificate is the hardest bit).
However, the article assumes that modification attacks are as simple as interception, which I do not believe is true. It's not about GUIs. Re-routing DNS requires holes and access that a properly configured infrastructure won't give you.
However, as long as you can put up a competing DHCP server at an open Starbucks network, you can probably own anything non-TLS.
While I agree with most of the issues raised in the article, I disagree with the blanket conclusion. With HTTPS, JS crypto can add to security when applied appropriately.
In particular, requiring a hacker to compromise not only the server but also each client adds to security and limits the damage footprint, much more so if combined with external black-box-style monitoring of served content and scripts to catch a server-side compromise in time.
The clients don't need to be hacked, they simply run code as instructed by the server using standard, documented interfaces. Once the server is compromised, or any other party that can serve non-image content into the page origin, it's game over.
I work on stuff used in payment systems too, and I agree. (It wasn't me who downvoted you.) Back when we could see the scores, it was possible to downvote something that you considered overrated, or alternatively refrain from downvoting it if you didn't feel it deserved to go negative.
I didn't mean the client has to be hacked, but I did mean that script from a compromised server has to run on the client to decrypt user data. The hacker has to compromise the server and then, for each connecting user, serve malware Javascript.
Meanwhile, an external network of monitors, masquerading as real users, detects that the server is compromised and alerts the admins, who can take appropriate action. Overall, a compromised server can be shut down within minutes, during which only a fraction of the user base will have been affected.
Or the connection in between, unless SSL is satisfying that part of our threat model.
then, for each connecting user, serve malware javascript.
No problem there, that's a core function of a server. It's a feature of MitM tools as well.
What's more difficult is an attacker who wants to target only a few specific users.
Meanwhile, external network of monitors, masquerading as real users,
A nice idea, let me know when you get that implemented.
detect that server is compromised and alert the admins who can take appropriate actions.
That's always a fun conversation to have. Be prepared for denial, anger, bargaining, and depression before getting to acceptance.
Monitor network: "We've found sometimes users are receiving the old version of the Javascript, you know, the buggy one? Particularly when certain key words like 'democracy' occur in the text."
Site operator: "Oh, must be an old file on one of the servers."
Monitor network: "We think it might be a targeted attack"
Site operator: "Well, what are the chances of that"
Monitor network: "This is not a random phenomenon like earthquakes"
Site operator: "But, you can't prove it, and not all users are affected, right?"
...and so on
Overall, compromised server can be shut down in minutes during which only a fraction of user base will have been affected.
This scheme should not be confused with actual security.
How does this not add to security?
More moving parts to break and make things worse. Every bit of script and supplemental page content is another weakest link.
And they will break, because they're being developed by people who either don't understand or are not willing to admit that they're pushing snake oil, even after the systemic flaws in their scheme have been pointed out repeatedly.
> Or the connection in between, unless SSL is satisfying that part of our threat model.
From my original comment: "With HTTPS."
> A nice idea, let me know when you get that implemented.
It's part of a service I'm currently previewing privately.
One of the projects I worked on was a sophisticated fraud-detection framework which was deployed to one of the largest banks and acquired by a top security company. Stuff like this is not easy to implement and work never ends but then that's why we get paid.
I could see it working well enough for banks. But banks are an unusual scenario. They (and maybe insurance companies) are the only businesses that have become accustomed to having so much loss due to fraud that they can handle it as aggregate risk and a cost of business.
for each connecting user, serve malware javascript.
For most web servers, this involves replacing a file. And since your attacker can just go to the page with crypto JS to receive the "clean" version, s/he can prepare the malware JS in advance.
Overall, compromised server can be shut down in minutes during which only a fraction of user base will have been affected.
How does this not add to security?
Oh, it adds to security. For the length of time between compromise and a replaced JS file. As noted above, s/he can easily rewrite your JS appropriately before entry, in which case you've got about a minute or two of extra safety.
If you assume you can detect any compromise of your system within 3 minutes, then yeah, 1 or 2 minutes out of 3 is great. If you miss it, then it's a whole lot of work for barely any benefit. And assuming you can detect any compromise within 3 minutes is highly optimistic.
Any malware script served to users will also be served to the monitors, and detecting the compromise is just a matter of comparing hashes with the originals.
Re detecting any compromise: in this approach, you only need to detect compromise of the content/scripts served to clients. In the extreme case, public read access to most of the server might even be plausible (although not advisable), IMO.
And when China hacks your server, takes a little bit longer, and inserts a rule that only serves the malware JS to Chinese-allocated IPs, will your monitor systems be checking from mainland China, too?
Maybe they don't modify your app's code, but insert an iptables rule that achieves the same effect.
When a scheme relies on perfect intrusion detection you are not discussing security, you're discussing damage control.
There are safety measures one has to take to deploy the stuff I describe, to protect against subset attacks like the ones you mentioned, as well as hackers using sleepers or logs to identify the monitors, etc. It's not easy stuff, and definitely not something startups should take on without getting side-tracked.
This is why I wrote "when applied appropriately" in my original comment.
While largely true, there is still scope for client side javascript encryption for applications where the service provider largely just wants to prove to some external agency (e.g. the government, or some court issuing a subpoena) that they can't have access to your data.
That'd be a hard thing to prove considering the service provider (and every other party serving non-image content into the page origin) could access the data any time they wanted by simply sending some script.
So are there situations where it is reasonable to do JavaScript cryptography?
Here's a hypothetical: you have producers and consumers of certain documents. You want a system where a producer can produce a document for a particular consumer, put it on a server, and that consumer (and only that consumer) can retrieve the document and read it. Ideally, you want to do this as unobtrusively as possible from the point of view of the consumer. If the server is compromised, the intruder should not be able to read any of the documents. It's hypothetical, but something I once worked on would have benefited from such a system, so I spent some time considering how it could be designed.
One approach would be to have a public/private key pair for each consumer. When a producer prepares a document for the user, it is uploaded encrypted using that consumer's public key. The encryption would be done on the producer's client machines (producers are on a company LAN using computers administered by a competent IT department, and can be assumed free of malware).
On the consumer side, we need to solve three problems: (1) generation of the public/private key, (2) storage of the private key, (3) decrypting downloaded documents.
One approach to this would be to tell the clients to install PGP, generate keys, and send the public key to the producers. Clients could then download documents via their browser, and run PGP to decrypt them. That fails the "unobtrusive to the client" requirement.
So let's consider the JavaScript approach. The JavaScript can be delivered over SSL, with all elements of the page coming over SSL from the same server (which could be separate from the document storage server).
For generation of the public/private key, the lack of a secure random number generator in JavaScript would seem a hindrance, but is it? Couldn't a CSPRNG be used in JavaScript, seeded with something derived from the user's password? That would have entropy limited by the entropy in the user's password--but how is that different from using a secure random number generator but then storing the private key encrypted with the password? In either case the difficulty of getting the private key for an attacker is limited by the entropy of the password.
If the private key is generated as above, it can be regenerated from the password as needed so there is no need for key storage.
It seems to me that the above sidesteps most of the issues raised in the article that are specific to JavaScript. The ones that remain, where someone has compromised the client machine via another route, would also apply to a client implementation done as a stand-alone application. If the client machine is compromised, the client's documents are vulnerable.
Let me also propose one more scenario. What about JavaScript that is NOT delivered over the net on each use? For example, suppose (still using the hypothetical producer/consumer example) that when a consumer signs up for the service, the sign up server generated the public/private keys, then dynamically created a static page with embedded JavaScript that includes the private key (encrypted with the consumer's password) as data. This static page is bundled up with any other resources it needs (images, style sheets) into a zip file, and that is given to the client.
The client unzips this, and when he wants to get documents launches the static page in his browser. The static page references its resources by file:// URLs. Essentially it is an application that happens to use the browser as an execution environment. The network is only used to make an AJAX call to get encrypted documents from the server.
If the server is compromised, the intruder should not be able to read any of the documents
If the server is compromised, the attacker could put a bad JS file in the HTML page (which is sent to the user), which will read the contents of the textbox showing the decrypted content, and then send this to $EVIL_HOST
"That would have entropy limited by the entropy in the user's password--but how is that different from using a secure random number generator but then storing the private key encrypted with the password? In either case the difficulty of getting the private key for an attacker is limited by the entropy of the password."
Because you just removed the requirement of the attacker to steal (or at least compromise) the laptop, to get access to the encrypted key: the entropy of the key was still high, it was simply protected on the laptop with a weak password; that's very different from deleting the key and forcibly reducing its entropy to that of the password.
What does it take to get some attention on HN? Do you have to be part of some secret cabal? I solved all the problems he talks about (and I worked with a crypto guy):
- Statically encrypt and publish content on HTTP server
- Transmit these via HTTP to an iframe component at client browser
- HTTP-iframe locally sends message to HTTPS-iframe via window.postMessage()
- HTTPS-iframe decrypts content (with pre-shared key) and renders it on page
I implemented a library to do this (3 years ago!). Anyone care to tell me just ONE vulnerability with this? Thanks!
bit.ly/ndt4ss
FTA:"THAT ATTACK SOUNDS COMPLICATED! SURELY, YOU'RE BETTER OFF WITH CRYPTO THAN WITHOUT IT?
There are three misconceptions embedded in that common objection, all of them grave..."
I guess I shouldn't lock my car, either, right? After all, someone could simply break my windows and steal whatever I have inside. Or maybe, just maybe, using some protection actually is better than using no protection, provided you don't put more faith in the protection than the protection warrants. Just like I wouldn't store gold bars in my car, I wouldn't use jscrypto to protect things that were too important for that level of protection. I will, however, feel okay about keeping my sunglasses in my car, and I will feel okay about using jscrypto to send my phone number over email.
I'm glad you mentioned this because I feel it's worth a response.
When you develop protocol or infrastructure code the more general the facility is that you develop the less control you have over how the result is going to be used.
For example SSL was originally developed for the purpose of making people feel safe transmitting their credit card numbers to e-commerce websites. Credit cards (like cars) have a pretty predictable upper bound on the financial loss should this system break.
However, the SSL/TLS protocol was very general and it has grown to be used for everything under the sun today. From protecting Twitter APIs to "launch the missiles"-type critical systems.
So when you advocate a general technique like Javascript browser crypto, you don't get to qualify it with "only for stuff that really isn't worth attacking anyway". You don't know in advance what data it's going to be used for, much less what the value of that data will be to attackers in the future.
"Any attacker who could swipe an unencrypted secret can, with almost total certainty, intercept and alter a web request. Intercepting requests does not require advanced computer science. Once an attacker controls the web requests, the work needed to fatally wound crypto code is trivial: the attacker need only inject another <SCRIPT> tag to steal secrets before they're encrypted."
This is strictly true from the perspective of cryptography. Crypto is always concerned with the strength of crypto from a mathematics point of view.
However, much of that worldview falls down in the real world. In the case of hashing a password before it goes over the wire (even the simplest salt-free MD5), which is cryptographically naive and "considered harmful," you are providing a general barrier against people Firesheeping in a cafe. Many would-be attackers are thwarted by what crypto nerds would see as silly.
In general, the weakest point of crypto in today's world is the human element. RSA Security got hacked not because the algorithm was weak but because someone installed a back door in RSA's network for the hackers. Once you get past the point where it is more effective to attack sideways than head-on, the crypto has done all it can do.
That being said.. just use SSL. It moves that gauge enough that anyone trying to get your users' information isn't going to bother direct attacks.
Edit:
What I meant by the 'salt-free' MD5 is that of a challenge-handshake response. The challenge is sent by the server and the client responds with md5(pw+challenge). The salt would otherwise need to be sent in an insecure manner that would provide only marginally more security than a challenge-handshake. This was unclear and I apologize.
What barrier? If you're sending password hash over the wire, and the server allows access based on this hash, the hash is essentially the password. Firesheep, BTW, steals cookies, not passwords.