Likewise, an attacker who breaks into the server can't get a dump of all the notes in the system. They would again have to modify the source code, which could be noticed by the owners and users, and which takes more sophistication than the average point-and-shoot attacker has.
As a legal matter, I don't believe there are any precedents for a vendor being forced to put a backdoor into a product they are distributing to others to be used in a place outside of the possession of the vendor.
While still vulnerable to all sorts of technical problems as described in TFA, a website that treats the encryption/decryption code as a product they distribute to customers may be subpoena-proof. Just because there are actions I can take to help the police gather evidence doesn't mean I can be compelled to do them. Turning over evidence in my possession is entirely different.
 I would love to hear one example of a vendor being legally compelled to backdoor a product. Things that definitely are not examples but people often say are examples include: Lavabit, Clipper, RSA's BSAFE, Hushmail, the NSA modifying hardware on their own, CALEA.
I would also, at this point, accept non-Internet analogues. Has a car dealer ever been required to install a GPS tag in a car they sell to a customer, for example?
 I would not recommend anyone volunteering to be the test case.
EDIT: took out mistake about BC versus Canada
EDIT To be clear, I'm questioning whether Hushmail was ever actually required to give a backdoored version of their code to anyone. This is opposed to having to give over information that was or would be within their servers at some point, even if Hushmail had to modify their systems to keep it.
It does mean it should be dismissed. "Not perfect" crypto means broken crypto.
The author effectively dismisses the point about a malicious server, especially with "just use SSL", because, well, if an attacker has landed on the server, SSL means squat.
The author is concerned about:
- the unavailability of secure functions necessary for crypto (as simple as RNG, as useful as an AES call, or as complex as a full-blown PGP API with transparent, ad-hoc key management) in browsers
- the unavailability of a secure, non-monkey-patchable runtime environment for JS crypto code to execute in, guaranteeing one can use the aforementioned functions as intended
- the vulnerability of the code as content in the channel itself (when not using SSL) or in the browser itself (XSS and all)
All of those are legitimate causes of concern (IOW, in the crypto world, gaping holes that render client-side JS crypto untrustworthy)
That one at least can be dealt with by web workers.
And to address the quote from the article that you don't need JS crypto if you're using SSL/TLS: that's a wrong assumption; there are still uses for it. It adds more security because the user data is encrypted before it hits the server. A hacker who owns the server at some point won't be able to get the data. They could, in theory, plant an exploit to collect user passwords (as users log in), but it's unlikely that wouldn't get noticed very quickly.
I think Nate Lawson does a much better job of making this argument than I did:
> I made JS crypto a very minor part of the talk because I thought it would be obvious why it is a bad idea. Apparently, I was wrong to underestimate the grip it seems to have on web developers.
That was in 2010. Judging from many of the reactions here, today, web developers still very very very much want to believe it's possible.
Although smart people sometimes feel silly repeating themselves, some messages need to be delivered at regular intervals, especially messages most people don't want to hear.
> Browser crypto can be scary. Do you have an evil extension
> installed? We can't tell. Further, have we been tortured
> you're not that important.
> So: only use this page if (1) you feel your browser is
> clean and (2) a life doesn't depend on it.
The good thing about keybase is that they also provide a cli tool for interacting with the service, so your private key never needs to go near a website.
I personally use a smart card and reader, so even the cli tool couldn't read my private key if it was compromised.
I had a keybase.io account for a few days and then deleted it recently. It seemed pretty nice, but then they sent me some invitations and it dawned on me that I don't know anyone else who would use it, and it adds even more complexity on top of the existing system, so isn't going to be that useful for newbies either.
They directly address some of the arguments against JS crypto: https://code.google.com/p/end-to-end/
When End to End was released I was curious if any proponents of "JS Crypto Considered Harmful" would have a response, but I didn't see one, nor do I see one here. I would be interested to see a response for why the Google approach is flawed.
- chicken and egg problem: being a browser extension, the code for End to End is distributed up front, when you do (presumably) trust the server. If the server is compromised in the meantime, you are not affected. Also, you could download the extension from a source you do trust and then use it on websites that you don't trust.
- content-controlled code / malleable runtime: End to End, as a browser extension, isolates itself from the code/content of actual websites.
- no RNG: "CS-PRNG is available thanks to WebCrypto."
- secure erasure: "The threat model we are trying to address discounts adversaries with physical access and users with malware outside the browser. Chrome’s design means that extensions should be safe against other extensions. Adversaries with this level of access have a plethora of attacks available to compromise data even in systems that control their memory carefully and wipe it."
- no secure keystore: ? (I don't know)
- timing and other side-channel attacks: "End-To-End requires user interaction for private operations in normal use, mitigating this risk. Non-user-interaction actions are rate-limited and done in fixed time. End-To-End’s crypto operations are performed in a different process from the web apps it interacts with. The End-To-End library is as timing-aware as it can be and we’ve invested effort to mitigate any exploitable risk."
Browser extensions are still part of the browser. They still have DOMs, for example.
Scenario 1: You run a note-storing service as described in the article. Your site is delivered through TLS, using one of the non-DHE ciphersuites. You used your CA's web interface to generate the certificate and private key, because it's simpler than generating your own CSR. An NSL with an attached gag order requires your CA to submit all generated RSA private keys to the NSA and shut up about it.
Scenario 2: You run any site with a login prompt. You are using OpenSSL.
I wouldn't want anybody betting their life or freedom on browser based crypto, but if widely adopted and well implemented, maybe it is good enough to make mass surveillance significantly harder? If everybody's crypto code is being tampered with, somebody is likely to notice.
Mass market encryption is a worthy goal, and browsers are a reasonable platform to achieve that goal. Educating users about not installing extensions that can filter pages in their browsers, recommending incognito mode, etc are good steps. And of course TLS should be considered a minimum requirement for transport. Browser encryption is never going to be military grade, but it's a step up over unencrypted communication.
I'm not sure if this argument is actually flawed, or just anathema to grown up cryptographers who prefer hard maths to wishy washy politics/economics?
And if you plan to implement it over unencrypted connections, no, nobody will notice if the NSA does mass interception of those. There is no way to notice it.
Over TLS? The gain would be that the server doesn't have to see the plaintext. Or at least in order to see a lot of people's plaintext, it will have to get away with sending bad crypto code to a lot of people, some of whom are likely to notice.
> if you plan to implement it over unencrypted connections
I'm not particularly planning to do anything, but if I was I imagine I would be doing it over TLS.
I would say that the overlap between the people that are likely to notice (or, for the matter, that are likely to read the reports from those who notice) and the people that benefit the most from transparent, client-side crypto is close to negligible.
For mainstream users (who shouldn't even be exposed to the concept of keypairs, since it blows their minds), you can give them a binary client and trust that their desktop isn't compromised, or you can give them a TLS connection and manage your server security.
While I've always enormously respected your opinion, you seem to have a very black/white perspective, aggressively attacking and/or belittling anything that isn't an infallible solution, even if it's a significant improvement for many or most scenarios.
You don't seem terribly pragmatic in many cases. I realized that long ago in a discussion on passwords and the browser - https://news.ycombinator.com/item?id=2000833
So many password incidents since would have been complete non-incidents with such a solution (or anything similar), so I've always remembered your raw negativity toward it because it wasn't a perfect solution: the idea that because it didn't solve every possible issue, it's better to solve no issues.
There is no such thing as an infallible solution. Start with that. When you're talking about actors who are exploiting RNG weaknesses, broaden your horizons a little.
PGP needs a key file. You need to (optionally) enter a passphrase. All someone needs to do is steal your key file and circumvent your passphrase (and there are countless mechanisms to achieve either; they aren't trivial, but remember we're talking about organizations that take advantage of imperfect RNGs...) and boom, PGP has been rendered a false sense of security over the history of your communications. I mean, if we're talking about rogue actors taking over servers and injecting false scripts, such a situation is just as viable.
Everything is on a gradient. Any simplification (such as "fallible versus infallible") is just garbage time.
Again, and I realize Ptacek is a bit of a hero around here, his words seemingly above question, but I go back to his response to that password thing, which was the moment I understood the disconnect between big security talk and actual security. When the alternative is (and continues to be) nothing, which is exactly the case in the password discussion, discarding options because they don't cover every scenario is absurd. It is grossly destructive, just as it's destructive to discredit PGP because it requires access to a key file.
Or, to quote cperciva's talk (https://news.ycombinator.com/item?id=7883707), "the purpose of cryptography is to force the US government to torture you." If a cryptosystem makes torturing you for the required information easier than attacking the cryptosystem itself, the cryptosystem is "strong enough." Any system for which this isn't true isn't doing its job.
No security solution is absolute, but instead every security approach is a "false sense of security" from some perspective, and every solution is steps on a gradient. Anyone who thinks otherwise is simply naive.
Do we eschew SSL rather than give users a "false sense of security"? There are many potential vulnerabilities to SSL, from stealing private keys, to co-opting or compromising root authorities, to more esoteric mathematical vulnerabilities.
Yet we still use SSL. We find a pragmatic medium where we achieve the greatest security possible within realistic confines and restrictions, understanding that there are (and always will be) potential weaknesses, and improve as we go. Simply tut-tuting and rote repeating the "false sense of security" nonsense does literally nothing for anyone. It is pseudo-enlightened babble.
On the other hand, I'm still learning about crypto, so I find these conversations illuminating. Thinking about ways to attack security, or trying to understand how other people approach attacks has given me a far deeper understanding than merely reading about it.
However, that's a personal bias because being critical of what I learn is how I learn. Needless to say, I used to routinely drive teachers crazy...
Sometimes it's right, sometimes it's not. I remember being yelled at that anti-spam products were stupid because "I can't stand losing even 1 mail out of 10,000!!" (In reality, email is a lossy system that doesn't have four 9's of reliability anyway.) I heard the same thing about all sorts of products from people in both industry and academia only to watch them become significant parts of the IT world. Nerds like black/white answers, especially about areas where they aren't experts. It makes the world much easier to understand.
(And I think it was a stroke of marketing genius for Zimmermann to call his product "Pretty Good Privacy" instead of anything implying perfection.)
So I have sympathy for your position here.
And I still regret not following up with Dave Mann about his attempts to create a numbering system for attacks, because someone else at the table said the AV vendors would never go along.
With JS crypto, that goes away - if the JS is shipped over the wire, it can just be silently replaced.
If you're relying on SSL to protect the JS crypto, why bother with the JS crypto in the first place?
When vendors ship some kind of client-side pre-built verification capability (i.e. code that DOES NOT come over the wire), much like the existing SSL stack, things will change dramatically.
PKI, as currently imagined, is less convenient than just speaking freely and feeling a nagging sensation that someone might be listening (which is what people are doing now.)
That seems to be the first important thing to establish here. Alice sends public key to Bob, Bob encrypts the data using said public key and returns the encrypted data to Alice. What guarantee does Alice have that Bob didn't turn around and also post the plain text of Alice's request to [name your favorite social network here]? By the same token, what guarantee does Bob have that Alice won't turn around and post the plain text of the response on [name your favorite social network here]?
So it seems that inherently, even in the most secure of communications, there has to be trust between the parties exchanging the messages.
Please forgive me if this is already being discussed. The thread is rather long at this point so admittedly I didn't read every part before posting this.
EDIT: for correctness, cleaned up references to bob / alice.
One argument seems to be that there's no need for the added layer on top of SSL. Then I would ask, so why have passwords at all? After all it's guaranteed that the communications are secure. It looks like the disconnect here is that encrypted messages on top of SSL are behaving as a mechanism to help authenticate the recipient of the messages, like a password, and not trying to act in the same way that SSL behaves as a way to secure communications between 2 parties.
I'd like to suggest that the software agent that has access to your private keys should NOT be in the same process as an agent that directly handles data from the internet. The reasoning is simple: defense in depth, and the principle of least privilege. Really, this is no different from the idea of running a web server in a chroot(2) jail.
A proper solution that might be something that can be provided in some sort of browser extension (*though it's likely to have platform-specific requirements) is to simply call gpg as an external process. It shouldn't be hard to wrap that up in an API provided by the extension.
Of course, it would be even nicer if browsers provided that feature directly, as they could utilize platform-specific features such as a secure password-entry UI (e.g. pinentry)
NSA has got the keys.
I'm an activist in a regime, and an extra level of indirection may slow them down.
Never trust any input, nor run any code, delivered by a remote host, unless its original author's digital signature has been verified by multiple distinct trusted mirrors, and as long as there is no way to change this input or code without repeating the process.
Edit: Reading on, I see that the author addressed SJCL and browser plugins separately, but I still think that most of the problems raised in the article could by solved by using them in conjunction. That gives you a secure channel for delivering peer-reviewed encryption algorithms.
Cryptology is founded on an academic, mathematical, and logical basis. You don't ask whether your attacker is active or determined; you assume that any flaw that exists could and will be exploited against you.
The founding principle of modern cryptography is, to paraphrase: "Even if an attacker knows the entire system, without the key they cannot recover the data."
Yes, it does. It means you are no longer using cryptography and are simply doing obfuscation at that point. Mathematical crypto is out the window and gone; you've admitted it. Your goal isn't crypto, but stopping casual and passive eavesdroppers. So you might as well use DES or RC2. The algorithm doesn't matter any more. It's just a question of defining "casual".
My post is in regard to client-side webpage crypto, not application-level node.js crypto.
> Your goal isn't crypto, but stopping casual and passive eavesdroppers. So you might as well use DES, or RC2. The algorithm doesn't matter any more.
>Nonsense, it's not obfuscation.
The issue with in browser JS you seem to be completely missing is if you can't trust the network to deliver your message. How can you trust it to deliver your crypto unchanged, un-tampered, etc?
The answer is you can't. If you create a system where you can, you render your crypto unnecessary.
The weakness of JS crypto is that the data you receive (the JS crypto library) can be from someone you can't trust. You shouldn't use JS crypto if your attack model includes active attackers on the network.
See what I did there?
So, authentication? We are talking about authentication, right? Because I said that. The weakness of DHE is that you can't authenticate who you are talking to.
>You shouldn't use JS crypto if your attack model includes active attackers on the network.
A crypto model has to include every possible attack vector. Crypto isn't something that happens in a vacuum. You always assume the worst. If you ever say, "No, nobody would go that far," you aren't doing crypto; you're doing obfuscation. Once a message leaves your hand, everything that can go wrong will go wrong.
In crypto research, people suggested moving away from SHA-1 when an attack was found, not because it was practically feasible but because it logically and mathematically existed. Crypto is about numbers and logic, not threat models and cost-benefit analyses of what an attacker could do.
Obfuscation is a game of frustration where you create a bar and keep raising it. Your system is fundamentally broken, but you count on people getting frustrated or bored before they break it. It's a game of wits.
The system you're talking about isn't crypto. It's obfuscation. You're counting on somebody not moving beyond A, B, and C, getting frustrated, and giving up. Crypto doesn't do that.
I'm going to respectfully decline discussing what attack models are and their use in security protocols on HN, as you unfortunately appear to not understand what I mean (which is probably my fault) or what attack models are used for in security research. You also appear to be confusing security research and cryptography research. You can mail me if you want to discuss this further.
One of many, this isn't the only issue. Even if you authenticate the host what prevents tampering via MitM attacks?
> All your ranting against what I'm saying is also directly applicable to plain Diffie-Hellman
Yes and no. Standard whitepaper Diffie-Hellman, yes. But implementations can fix that, as you point out.
>That does not disqualify plain Diffie-Hellman from having some use
>and that does not disqualify JS crypto from having some use
Wrong. Because you keep dodging my main point. Anything that solves the problems of:
1) Is the party you're talking to the right person (authentication)?
2) Is your code arriving unchanged (authentication + encryption)?
These are the two main issues with JS crypto. Whatever solves them also solves whatever problem you're trying to solve with JS crypto in the first place. Any solution that allows you to create secure client-side JS crypto fundamentally makes it obsolete. That's the problem I keep going on about.
Well, there we have it. I happen to have worked on a project where plain Diffie-Hellman was used not long ago. The active-attacker model was genuinely not a threat (or was solved by other methods) in this particular context, and the simple confidentiality provided by a plain Diffie-Hellman key exchange was enough. As another example, plain Diffie-Hellman is also used in SSH sessions without predistributed keys. Clearly, the presence of active attackers is not always a problem in the real world! And if that is the case, then there also exist legitimate uses for JS crypto.
As a sidenote nitpick, no implementation of plain Diffie-Hellman can fix the authentication issue. There are instead variations built upon plain Diffie-Hellman (notably authenticated Diffie-Hellman) that solve the authentication issue. But these protocols are not the same as plain Diffie-Hellman.
 RFC 4253, section 8
 I feel that I should stress again that although this is true for these particular use cases, JS crypto should not be trusted in general.
This is a slippery slope argument. We solved the issue with X therefore we can solve it with Y. The problem is when you solve the issues JS browser side crypto has, you render it unnecessary.
If you can.
1) Authenticate that the message is unchanged, i.e. assure that the libraries you are using are secure.
2) Authenticate who you're talking to, i.e. assure that the host is the correct host and not a MITM.
3) Ensure the message is unreadable in transmission, i.e. ensure eavesdroppers cannot log your traffic.
What use is there for client side JS crypto?
You just solved every issue with two-way encrypted communication. Just build it into a toolkit and you render JS client-side crypto unneeded.
Also, to be clear: compiled, application-level node.js crypto is useful. In-browser JS crypto is bad: completely untrustworthy and broken.
Just like OpenSSH without server keys is vulnerable to MITM attacks, but still both useful and protected from eavesdropping. Just like people use plain Diffie-Hellman to negotiate a shared key, despite the lack of authentication. Sometimes the active attack is simply not considered a problem, or the risk is simply accepted.
This is not a recommendation. This is a discussion about people having different requirements and uses. Almost all JS crypto used on the public internet does it wrong. But that doesn't mean that JS crypto is 100% wrong in any conceivable case. Most people using it are just stupid.
: Choosing which risks to accept or not is daily practice for anyone dealing with security issues.
Attacking a strawman. No one man will revolutionize JS crypto. Not you, not Turing. Mathematicians are involved; when they all agree on something, they're generally right.
The systems you're proposing do nothing. And you know they do nothing.
>that means that they may be useful to people who are aware of the security
But you don't care. It's a false sense of security. You're selling rice paper advertised as bulletproof vests. That's my problem.
Yes, these programs are entertaining thought experiments, but in the real world they're useless. And if you advertise them as anything other than useless, you're no different from an 1860s snake-oil salesman.
Also, you mentioned assumed risk. Assumed risk is understanding that there exist OpenSSL exploits you don't know about, have no way of fixing, and no way of preparing for. You have to assume this risk to achieve any level of security. You have to be mindful that the security landscape can change at any moment and perhaps render you insecure.
Assumed risk does not mean using a broken cryptosystem whose weak points exist BY DESIGN and are fundamentally impossible to patch out. That's just called broken crypto.
But just because it's "the best we can do," that doesn't mean it's a good idea.
EDIT: 1 minute before I made this comment, 'tptacek said the fallacy better elsewhere on this page: "we really need browser crypto to work, therefore it works."
Most CVEs that come to mind assume a somewhat determined attacker.
It takes a reasonably determined attacker to commit to Rails without permission, or to run a ten-line Perl script to crash a server, too.
Waving away a problem via "the bad guys would need to think for more than one second" is not exactly reassuring.
Whether you view that attack model as something worth considering depends entirely on context. But it's a valid view for many applications. As long as people don't use it with any expectations of security under active attack models, I'd say that's okay.
My argument would be that trying to protect against passive attackers with JS adds nothing beyond what SSL already offers.
Which is already required as a matter of course, and already compromises the payload if SSL is broken (again).
As pointed out elsewhere in the thread, there are few attacks that allow you to listen in on an SSL connection's content without also allowing you to modify that content - say, with a version that pastebins your keys.
Hence my argument that JS cannot provide anything SSL lacks, plus or minus some wishful thinking. Combine this with the fact that it's impossible to protect against a MITM-modified JS payload (see the "chicken-egg problem" portion), and you have a rather uphill battle here.
On the other hand, plenty of people who can modify the traffic exist. Starting with every employee at your ISP and the Amazon picker who shipped your router.
The NSA are well resourced, but I doubt they have much more than a second spare for every internet user on the planet. So speaking as a relatively uninteresting person, I actually would find that somewhat reassuring. If you're a Person Of Interest, that's obviously a very different situation, and you shouldn't use browser-based crypto.
So, yeah - if you rolled your very own library that's unique to this planet for exactly one website, congratulations! You're secure as long as there are no attackers! Doesn't really say anything useful about your security though. Or about the viability of using a fundamentally broken crypto platform to do crypto.
Now, I'm not defending JS as a method of doing this, but there is a use case.
To me, that's the biggest issue.
As for the rest of the title, http://meyerweb.com/eric/comment/chech.html
One idea: a way to bundle, hash, and optionally sign a set of HTML/CSS/JS resources (not unlike a Chrome extension). If the bundle is updated the user can be prompted. If the user desires, they can check to see if trusted individuals or groups have already reviewed the code, or review it themselves. Perhaps the code is hosted on Github (or wherever) and people can comment on questionable changes there.
* Using closures to encapsulate "private" variables is pretty bulletproof, AFAIK.
* ES5 features like "Object.freeze" and "strict mode"
* Object-capability safe subsets of the language, e.g. Caja, SES, etc
* CommonJS-style module system
Here's the thing about why JS-in-webpage crypto is fundamentally useless. The purpose of cryptography is to, somehow, secure some sort of communication between two parties. Maybe it's as tiny as a zero-knowledge proof, maybe it's a full-on encrypted general-purpose channel like SSL/TLS provides that can send arbitrary data back and forth in real-time, maybe we're trying to send small proofs of a message's integrity via hashes or signatures, maybe we're sending messages from our past self to our future self and want it safe in the meantime "at rest", but crypto is always about some sort of communication between parties A and B (the traditional Alice and Bob, even in that last case Bob is just an older Alice wearing a mustache).
Web pages run in a sandbox in which they fundamentally become part of the server's environment briefly. They're just an extension of the server, and are essentially designed from top to bottom to ensure that the only thing a web page can access is what was sent by the same server. The only cookies a server can get are the ones it set. (Modulo some cross-internal-domain stuff, but it doesn't change my point here.) The only requests it can manipulate are the ones it sent. The only resources the web page will access are ones the server tells it to. (Yes, a server can direct you to resources on other servers, and that's actually a big deal, a big delegation of trust, and one of the trickiest corners of browser security.) The web browser forbids a page from the server from accessing the hard drive on its own, or accessing any other site's stuff, etc. Structurally, the web page is just an adjunct to the server, by design, and anywhere this property fails is considered a security hole and fixed as quickly as possible. Without any ability to access any local resources that were not themselves originated from the server (i.e., HTML local storage doesn't get you out of this, the server fully controls it), the web page has no independent identity to assert. It is totally in thrall to the server.
Note how I kept saying "web page", and not "web browser". There's a big difference; web browsers are allowed to do a lot of things a web page is not. Pages have a distinct execution context and security policy different from the browser.
In the crypto sense, there's no communication between a "web server" and the "web page"; it's all just one system. The web browser may use SSL/TLS to communicate the necessary information to form a web page, but once that is done, the web page is deliberately put into a context where it is just an adjunct to a server, and there's no distinct two parties with which to communicate anymore, from a crypto sense. This may seem counterintuitive, because we see messages flowing back and forth, but that's all internal chatter, the browser functioning as an internal bus for the page/server unified security context. We have turned the full power of our cryptography and security research into making web browsers that enable web pages to function as extensions of the web server. It was and is not easy.
Further, what defines "server" is not the human word or concept, which can be distracting. What defines the "server" is whatever finally produced the bytes that were used to create the web page. If your browser isn't using TLS/SSL, that turns out to be pretty much "just whoever felt like serving some bytes to you". (If you don't think that "intercepting web requests" is practical, it is. It's off-the-shelf tech for hackers. Do not assume it is hard.) On the other hand, if SSL is properly used (and let's skip over what that means and the validity of cert authorities, etc), then you do have assurance that the bytes came from your server without anybody in between, and the web browser is providing assurance to the web page that it is on an uninterrupted channel.
When not using browser crypto, it just doesn't matter how you spin around; the attacker owns the web page, and you can't do anything about it. And when I say "own", I mean it, fully, the web page is actually functioning as an adjunct to their server, and you're stuck with the results. It doesn't matter what crypto you think you're pushing to the user's browser, because what's actually happening is that the attacker is pushing their crypto to the user, and its relationship to what you intended to push is entirely and solely up to their good graces, and by definition we're pretty much talking about people without good graces. Without SSL, you have ZERO control over the user's webpage, and the attacker has all of it. Unsurprisingly, there's no crypto system that can survive that restriction.
When using browser crypto, SSL/TLS is already providing you the maximum degree of assurance possible that the channel is secure, more or less to the maximum extent the network will permit (i.e., attackers can observe byte flows or who you connect to, and there's not much that can be done about that). The argument that in-page crypto is useless amounts to the observation that this binary situation admits no "threading the needle", especially in light of the fact that without SSL we pretty much get to assume that an attacker can do absolutely anything to the data between the user and your server, and it's pretty hard to construct a crypto system that can stand up to the attacker arbitrarily manipulating it on the way to the user, which is what many people here are trying to do.
(Incidentally, clearly understanding the difference between the web page and the web browser is also important for understanding why this is particularly a problem for web pages and not so much other things. It's because web pages go through so much effort to run in a sandbox such that the server doesn't get any additional permissions it shouldn't have via sending malicious web pages.)
(There's a better blog post in here struggling to get out; this is my first attempt to put this in words. The server probably needs a page/server distinction too, for instance; the web page isn't running with the full "server" privileges, of course, it's actually also running as a "page" sort of thing too, where the browser and the server are collaborating to create a single unified security context hosted within the two of them.)
Bring Your Own Filesystem (https://github.com/diafygi/byoFS)
Example chat demo: https://diafygi.github.io/byoFS/examples/chat/
It seems like an unhosted-style app (unhosted.org) can mitigate all of the OP's concerns.
I address some of this in the byoFS README and again in an /r/crypto discussion.
Since the webapp is unhosted, the webapp is built to work when being served from anywhere, including your local filesystem. This means that you could download the webapp anonymously, inspect/audit/checksum it, then run it from your local filesystem in the browser (try it! just right click and Save As...). Alternatively, you could load it from a server that you would trust to kill itself rather than comply with a secret court-ordered compromise (e.g. Lavabit, Internet Archive, etc.).
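As a sketch of that audit-then-run workflow (the file name and stand-in page content here are illustrative, not byoFS itself), checksumming the saved copy might look like:

```shell
# Hypothetical sketch: after saving the unhosted webapp locally
# ("Save As..." in the browser), record the hash of the copy you audited,
# then re-verify it before opening the file again later.
printf '<!doctype html><title>demo app</title>\n' > index.html  # stand-in for the saved app
sha256sum index.html > index.html.sha256                        # hash of the audited copy
sha256sum -c index.html.sha256                                  # verify before running locally
```

If the check fails, the file has changed since you audited it and shouldn't be opened.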
Additionally, since the webapp is just static files, all the webapp server sees is anonymous requests (over https to prevent MITM). It doesn't know who is requesting the static files, so it would be difficult to perform a targeted malicious injection. You would have to broadcast the injection, which is generally ill-advised since it might be spotted by a vigilant third party. Most of the surveillance injection attacks that have been leaked have been targeted, so this basically cuts off that attack vector.
So, in order to compromise this webapp through injection, you'd have to hack into the trusted static server and blindly serve the injection to everyone who requests the webapp (hopefully including your target). This is basically the same attack vector you'd have to do if you were trying to inject something into a download-and-install local application.
True, which is why WebCryptoAPI should be prioritized, and I can stop using SJCL in byoFS. Once APIs for crypto primitives are baked into browsers, this argument disappears.
It's certainly much better than inspecting desktop apps. Also, if your webapp is unhosted, you can certainly publish a signed hash of the static files, which can be verified after you download the app and before you run it.
One infrastructure improvement that might be very helpful would be to be able to buy an SSL certificate from a CA that is limited to a particular file hash, which the browser checks before showing the connection as "validated" (maybe a checkmark beside the https lock?).
 - https://github.com/diafygi/byoFS#security-and-philosophy
 - https://pay.reddit.com/r/crypto/comments/289w7x/bring_your_o...
Then there's calling it "cryptography", when all the article talks about is hashing passwords. There's more to cryptography than hashing passwords, and not all of it is susceptible to the attacks described.
Third, the faulty logic of "this new technique has a bad edge case, so we should completely reject it, despite any benefits".
The article just glosses over the situation where the script is served securely and yet you hash at the client. "We already have a secure connection, anyway," the author claims. Sure, but the server still gets the plain-text password, because TLS doesn't hash, it encrypts (and the server decrypts). If the server is passing the login request to a third party to check the hash, it turns out it's still a good thing to hash at the client, so the intermediating party can't abuse its role, without the horrid UX of OAuth. Has the author thought of that? Nope, just dismissed the potential outright.
Plain old-school TLS/SSL connections have exploitable edge-cases as well. Should we "consider them harmful"?
Intelligent conversations about security require nuanced opinions that go into when something is useful, and when it isn't, and let us make the call if it's harmful or useful for a given project.
Calling something "harmful" outright and only listing the cons without the pros isn't this kind of intelligent conversation. It's just counterproductive scaremongering.
I do have a question though -- you say that 'you can deliver the JS crypto with SSL, but then it's irrelevant because the connection is secure,' so I'd like to know, what's your opinion of blockchain.info?
It adds JS crypto on top of SSL, and provides a signed browser extension, for reasons that seem to make sense given its use case.
In theory, you could even walk up to a brand-new (assuming uncompromised) computer and reinstall the plugin. But you would still need some way of knowing that you were installing the same version you decided to trust earlier. Recognizing checksum pictures, I guess?
This overlooks a major use case of client-based cryptography where you don't want to expose any private keying material to the server.
As the client, I care about my protection.
If my secret keying material instead is on some company's servers it can be taken surreptitiously (by court order or not).
The biggest issue is that the crypto code can be changed at any time, transparently. It's typically distributed by the same party that has your data, so all you're really doing is transferring trust to them.
What you are calling an "edge case" is the main feature since the first iteration of the language. That it runs server scripts on the client. And yes, it does mean that it must be completely rejected for crypto code.
If the article glosses over anything, it's the fact that those problems are fundamental. They can't be amended and won't go away at any point in the future. If anything, the article should make this even more obvious.
I guess that's the second article I've upvoted on HN... At first I thought "No shit! But I'll take a look", but after reading these kinds of responses, yes, I think it should get a few days on the front page.
Nobody even knows what "fixing" XSS would look like.
The main problem is backwards compatibility, as older browsers don't support them, but the idea that people have their head in the sand re. XSS is complete nonsense.
If "writing HTML" exposes cross-site scripting vulnerabilities on your site, then maybe "writing HTML" isn't so easy after all.