Hacker News
JavaScript Cryptography Considered Harmful (matasano.com)
127 points by mazsa on June 17, 2014 | 140 comments

I disagree that just because it's possible for a malicious or compromised server to send evil JavaScript, JavaScript crypto should not be done.

You have to analyze the threat model. Take the note-taking webapp they use as an example. Without JavaScript crypto, anyone who breaks into the server can read everyone's notes. Likewise, the FBI can get a warrant and have the owner of the web application turn over everyone's notes.

If they do use JavaScript crypto, instead of just getting a warrant for the machine, the FBI has to convince the owner of the site to modify their source code to target an individual user and steal their key. That is a significant difference. Also, the victim has to log in again for that attack to succeed; if they just used the server for a couple of weeks and left, they are safe.

Likewise, an attacker who breaks into the server can't get a dump of all the notes in the system. They again have to modify the source code, which could be noticed by the owners and users, and that takes more sophistication than the average point-and-shoot attacker has.

So is JavaScript security perfect right now? No... but that doesn't mean it should be dismissed.

None of that requires JavaScript. You basically just described the Lavabit model, where user passwords were used to derive key material that was never stored on the server. If you trust the server to give you good client-side code barring exceptional cases, you might as well trust them to discard your password after use.

There is a difference in that Lavabit did have possession of the passwords and even plaintexts, however ephemerally.

As a legal matter, I don't believe there are any precedents for a vendor being forced to put a backdoor into a product they are distributing to others to be used in a place outside of the possession of the vendor.[1]

While still vulnerable to all sorts of technical problems as described in TFA, a website that treats the encryption/decryption code as a product they distribute to customers may be[2] subpoena-proof. Just because there are actions I can take to help the police gather evidence doesn't mean I can be compelled to do them. Turning over evidence in my possession is entirely different.

[1] I would love to hear one example of a vendor being legally compelled to backdoor a product. Things that definitely are not examples but people often say are examples include: Lavabit, Clipper, RSA's BSAFE, Hushmail, the NSA modifying hardware on their own, CALEA.

I would also, at this point, accept non-Internet analogues. Has a car dealer ever been required to install a GPS tag in a car they sell to a customer, for example?

[2] I would not recommend anyone volunteering to be the test case.

Law enforcement allegedly backdoored Hushmail. Why do you think that isn't a valid example?

I know Hushmail said "we could end up having to ship out a compromised Java executable" but did that ever happen? I think they shot off their mouth thinking only of "well, I guess the government can make us do anything, right?" and deciding that was how the government could do it.

EDIT: took out a mistake about BC versus Canada.

EDIT: To be clear, I'm questioning whether Hushmail was ever actually required to give a backdoored version of their code to anyone. This is opposed to having to give over information that was or would be within their servers at some point, even if Hushmail had to modify their systems to keep it.

> So is javascript security perfect right now? no... but that doesn't mean it should be dismissed

It does mean it should be dismissed. "Not perfect" crypto means broken crypto.

Nothing is perfect. Various crypto algorithms have minor weaknesses but we still use them. Your OSS compiler could have been compromised decades ago ("On Trusting Trust"). The NSA could have hidden a backdoor in every CPU.

It's not just a malicious server, an attacker can put himself between you and the perfectly secure server and then send you whatever javascript code he likes, that's the whole point. And yes, since it cannot be made secure, it should be outright dismissed.

> It's not just a malicious server

The author effectively dismisses the point about a malicious server, especially with "just use SSL": because, well, if an attacker has landed on the server, SSL means squat.

The author is concerned about:

- the unavailability of secure functions necessary for crypto (as simple as RNG, as useful as an AES call, or as complex as a full-blown PGP API with transparent, ad-hoc key management) in browsers

- the unavailability of a secure, non-monkey-patchable runtime environment for JS crypto code to execute in, guaranteeing one can use the aforementioned functions as intended

- the vulnerability of the code as content in the channel itself (when not using SSL) or in the browser itself (XSS and all)

All of those are legitimate causes for concern (IOW, in the crypto world, gaping holes rendering client-side JS crypto untrustworthy).

> - the unavailability of a secure, non-monkey-patchable runtime environment for JS crypto code to execute in, guaranteeing one can use the aforementioned functions as intended

That one at least can be dealt with by web workers

Worker = myMaliciousWorkerImpersonatorFunction;
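To make that one-liner concrete, here is a hedged sketch of why a malleable runtime defeats workers: any script that runs earlier on the page can shadow the Worker constructor before the crypto code ever loads. The real Worker only exists in browsers, so a stand-in global simulates it here:

```javascript
// Stand-in for the browser's real Worker constructor (simulated, since
// this sketch isn't running in a browser).
globalThis.Worker = function RealWorker(url) {
  this.url = url;
  this.genuine = true;
};

// Attacker-injected script, running first, wraps and replaces it:
const RealWorker = globalThis.Worker;
globalThis.Worker = function MaliciousWorker(url) {
  const w = new RealWorker(url);
  w.genuine = false; // stands in for "siphon off every posted message"
  return w;
};

// The victim's crypto code has no way to tell it got the impostor:
const w = new Worker('crypto-worker.js');
```

Nothing in the page's own JavaScript can verify it is talking to the unmodified constructor.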

How exactly can he do that when all the traffic is encrypted via SSL?

And to address the quote from the article that you don't need JS crypto if you're using SSL/TLS: that's a wrong assumption; there are still uses for it. It adds more security because user data is encrypted before it hits the server. A hacker owning the server at some point won't be able to get the data. He could, in theory, plant an exploit to collect user passwords (as they log in), but it's unlikely that it wouldn't get noticed very quickly.

I think the problem is when using JavaScript crypto instead of TLS, not in addition to. The article should probably address this specifically.

Not just: any attack that can inject arbitrary code into a web page (e.g. XSS attacks), plus other crypto-oriented attacks due to the lack of serious primitives (e.g. timing attacks or attacks on the RNG).

Sure but I'm not clear on how this would be made worse by in-browser crypto.

That's not a problem with JavaScript, though. In that case, he could send whatever he wanted.

He cannot send a browser plugin, nor can he send desktop software.

The only (dangerous) thing an attacker can send in this case is Javascript.

Of course he can send you executable content right away. He can also send you to a page and instruct you to download something.

Previously: https://news.ycombinator.com/item?id=5123165

I think Nate Lawson does a much better job of making this argument than I did:


It seems to be worthwhile to post it again. Nate Lawson wrote:

> I made JS crypto a very minor part of the talk because I thought it would be obvious why it is a bad idea. Apparently, I was wrong to underestimate the grip it seems to have on web developers.

That was in 2010. Judging from many of the reactions here, today, web developers still very very very much want to believe it's possible.

Although smart people sometimes feel silly repeating themselves, some messages need to be delivered at regular intervals, especially messages most people don't want to hear.

Sorry, I didn't notice that it was submitted previously: TFTL.

I've been thinking that it's time this article make it to the front page. If this was an old school forum, I'd ask the mods to pin it at the top. There are just too many damn discussions of JS crypto on here. It's like flowing glass all over again.

When an article hasn't had significant attention in about a year, reposts are usually ok (https://news.ycombinator.com/newsfaq.html). So you're good.

good luck with the audit.


That would be a funny joke, except the guy who submitted it seems to be dead serious in thinking he found a secret message.

I'm curious if anyone who worked on this article would be willing to comment on what they think of Keybase.io's client-side crypto implementation:

    > Browser crypto can be scary. Do you have an evil extension 
    > installed? We can't tell. Further, have we been tortured 
    > into serving you custom, targeted JavaScript? Hopefully 
    > you're not that important.

    > So: only use this page if (1) you feel your browser is 
    > clean and (2) a life doesn't depend on it.

If you use the web based crypto from keybase.io, your key can be compromised if they are compelled to do so, or they are hacked.

The good thing about keybase is that they also provide a cli tool for interacting with the service, so your private key never needs to go near a website.

I personally use a smart card and reader, so even the cli tool couldn't read my private key if it was compromised.

I had a keybase.io account for a few days and then deleted it recently. It seemed pretty nice, but then they sent me some invitations and it dawned on me that I don't know anyone else who would use it, and it adds even more complexity on top of the existing system, so isn't going to be that useful for newbies either.

Since this article was written, Google released an alpha version of a JS-based crypto tool: http://googleonlinesecurity.blogspot.com/2014/06/making-end-...

They directly address some of the arguments against JS crypto: https://code.google.com/p/end-to-end/

When End to End was released I was curious if any proponents of "JS Crypto Considered Harmful" would have a response, but I didn't see one, nor do I see one here. I would be interested to see a response for why the Google approach is flawed.

I'm a crypto novice, but from my (layperson) reading here is how the Google approach addresses the points made in this article:

- chicken and egg problem: being a browser extension, the code for End to End is distributed up front, when you do (presumably) trust the server. If the server is compromised in the meantime, you are not affected. Also, you could download the extension from a source you do trust and then use it on websites that you don't trust.

- content-controlled code / malleable runtime: End to End, as a browser extension, isolates itself from the code/content of actual websites.

- no RNG: "CS-PRNG is available thanks to WebCrypto."

- secure erasure: "The threat model we are trying to address discounts adversaries with physical access and users with malware outside the browser. Chrome’s design means that extensions should be safe against other extensions. Adversaries with this level of access have a plethora of attacks available to compromise data even in systems that control their memory carefully and wipe it."

- no secure keystore: ? (I don't know)

- timing and other side-channel attacks: "End-To-End requires user interaction for private operations in normal use, mitigating this risk. Non-user-interaction actions are rate-limited and done in fixed time. End-To-End’s crypto operations are performed in a different process from the web apps it interacts with. The End-To-End library is as timing-aware as it can be and we’ve invested effort to mitigate any exploitable risk."

"They directly address some of the arguments against JS crypto:"

And in the very first 5 words, "End-To-End is a Chrome extension" (emphasis mine), they take themselves out of the domain of the original post, which is 90% about Javascript included in web pages, not Javascript in general.

From the page: "For the rest of this document, we're referring to browser Javascript when we discuss Javascript cryptography."

Browser extensions are still part of the browser. They still have DOMs, for example.

Maybe the article is trying to limit itself to talking about "Web page JavaScript". But if so this isn't very clear.

While I agree with most of the article, in light of recent events (blanket surveillance, Heartbleed), an argument could very well be made in favour of using JS crypto -- not by itself, but in addition to TLS.

> The problem is, having established a secure channel with SSL, you no longer need Javascript cryptography; you have "real" cryptography.

Scenario 1: You run a note-storing service as described in the article. Your site is delivered through TLS, using one of the non-DHE ciphersuites. You used your CA's web interface to generate the certificate and private key, because it's simpler than generating your own CSR. An NSL with an attached gag order requires your CA to submit all generated RSA private keys to the NSA and shut up about it.

Without JavaScript crypto, the NSA is able to passively decrypt each connection and collect all of the notes' contents.

If you add JavaScript crypto (again, as described in the article), they need to either mount an active attack (which might get discovered) or find a flaw in the JS crypto (which is very much possible -- but adds work).

Scenario 2: You run any site with a login prompt. You are using OpenSSL.

Without JavaScript crypto: during a brief window of two years or so, attackers may be able to exploit Heartbleed and read unencrypted passwords that were submitted by users.

If you add JavaScript crypto, attackers may be able to exploit Heartbleed and read HMAC-digests of recently submitted user passwords. Other sensitive data might, of course, still be disclosed.

Just putting this heresy out there to see what happens:

I wouldn't want anybody betting their life or freedom on browser based crypto, but if widely adopted and well implemented, maybe it is good enough to make mass surveillance significantly harder? If everybody's crypto code is being tampered with, somebody is likely to notice.

Browser cryptography has its place - the simple fact is that PGP is too hard for everyday users in its current guise (your parents are not going to go to key exchange parties). And, requiring users to install binaries on their computer (which can feasibly come with auto-update) to use encryption has just as high a risk of being subverted by somebody with malicious intent who is trying to target the communications of a particular individual. (Which is harder: acting as MITM in a company's internet traffic, or breaking into the target's Windows desktop?)

Mass market encryption is a worthy goal, and browsers are a reasonable platform to achieve that goal. Educating users about not installing extensions that can filter pages in their browsers, recommending incognito mode, etc are good steps. And of course TLS should be considered a minimum requirement for transport. Browser encryption is never going to be military grade, but it's a step up over unencrypted communication.

PGP being "too hard" doesn't mean that browser crypto "has its place". As I said upthread: if there's a "#1 most common fallacious reasoning strategy" for browser crypto experts, it's this one: "we really need browser crypto to work, therefore it works". No.

How about the argument that although browser crypto won't render your communications reliably secure, widespread adoption could make it significantly harder to transparently implement mass surveillance?

I'm not sure if this argument is actually flawed, or just anathema to grown up cryptographers who prefer hard maths to wishy washy politics/economics?

Please cite any gain that this gives over TLS.

And if you plan to implement it over unencrypted connections, no, nobody will notice if the NSA does mass interception of those. There is no way to notice it.

> Please cite any gain that this gives over TLS.

Over TLS? The gain would be that the server doesn't have to see the plaintext. Or at least in order to see a lot of people's plaintext, it will have to get away with sending bad crypto code to a lot of people, some of whom are likely to notice.

> if you plan to implement it over unencrypted connections

I'm not particularly planning to do anything, but if I was I imagine I would be doing it over TLS.

> ... in order to see a lot of people's plaintext, it will have to get away with sending bad crypto code to a lot of people, some of whom are likely to notice.

I would say that the overlap between the people that are likely to notice (or, for the matter, that are likely to read the reports from those who notice) and the people that benefit the most from transparent, client-side crypto is close to negligible.

The NSA is routinely MITMing connections. How will browser side crypto help if they get to decide what code your browser runs?

The NSA is routinely intruding on desktop installs. How will client side crypto help if they get to patch binaries on target computers?

It's harder to intrude on desktop installs. Any anti-NSA strategy should be predicated on a cost basis.

I don't agree. By "has its place" I don't mean "appropriate for all" - I mean a step up for general security of communications.

For mainstream users (who shouldn't even be exposed to the concept of keypairs, since it blows their minds), you can give them a binary client and trust that their desktop isn't compromised, or you can give them a TLS connection and manage your server security.

PGP being "too hard" doesn't mean that browser crypto "has its place".

While I've always enormously respected your opinion, you seem to have a very black/white perspective, aggressively attacking and/or belittling anything that isn't an infallible solution, even if it's a significant improvement for many or most scenarios.

You don't seem terribly pragmatic in many cases. I realized that long ago in a discussion on passwords and the browser - https://news.ycombinator.com/item?id=2000833

So many password incidents since would have been a complete non-incident with such a solution (or anything similar), so I've always remembered your raw negativity because it wasn't a perfect solution: That because it didn't solve every possible issue, it's better to solve no issues.

What's the point of a fallible solution? The only people who actually need cryptography need infallible solutions. Everyone else can rely on security by obscurity--rot13ing their text will be good enough for them.

What's the point of a fallible solution? The only people who actually need cryptography need infallible solutions.

There is no such thing as an infallible solution. Start with that. When you're talking about actors who are exploiting RNG weaknesses, broaden your horizons a little.

PGP needs a key file. You need to (optionally) enter a passphrase. All someone needs to do is steal your key file and circumvent your passphrase (and there are countless mechanisms to achieve either; they aren't trivial, but we're talking about organizations that are taking advantage of imperfect RNGs...) and boom, PGP has been rendered a false sense of security over the history of your communications. I mean, if we're talking about rogue actors taking over servers and injecting false script, such a situation is just as viable.

Everything is on a gradient. Any simplification (such as "fallible versus infallible") is just garbage time.

Again, and I realize Ptacek is a bit of a hero around here, his words above question, but I go back to his response to that password thing, which was the moment I understood the disconnect between big security talk, and actual security. When the alternative is (and continues to be) nothing -- which is exactly the case in the password discussion -- discarding options because they don't cover every scenario is absurd. It is grossly destructive, just as it's destructive to discredit PGP because it requires access to a keyfile.

I think the difference is between attacks that are currently well-known, and are automatable (attacks that are, effectively, on the cryptosystem itself)--and attacks that boil down to social-engineering/rubber-hose cryptanalysis (attacks that are, effectively, on you.)

Or, to quote cperciva's talk (https://news.ycombinator.com/item?id=7883707), "the purpose of cryptography is to force the US government to torture you." If a cryptosystem makes torturing you for the required information easier than attacking the cryptosystem itself, the cryptosystem is "strong enough." Any system for which this isn't true isn't doing its job.

Broken crypto is not an improvement over no crypto.

Javascript crypto solves no problem at all, and can break better encryption implementations if you use both. At best, it's useless.

Giving users a false sense of security is absolutely worse than making sure they know there is no security.

EDIT: I remain convinced that it is impossible to have any credible security discussion on HN. Everyone just clucks and spouts rhetoric, the Ptacek fan club in full effect.

No security solution is absolute, but instead every security approach is a "false sense of security" from some perspective, and every solution is steps on a gradient. Anyone who thinks otherwise is simply naive.

Do we eschew SSL rather than give users a "false sense of security"? There are many potential vulnerabilities to SSL, from stealing private keys, to co-opting or compromising root authorities, to more esoteric mathematical vulnerabilities.

Yet we still use SSL. We find a pragmatic medium where we achieve the greatest security possible within realistic confines and restrictions, understanding that there are (and always will be) potential weaknesses, and improve as we go. Simply tut-tutting and rote-repeating the "false sense of security" nonsense does literally nothing for anyone. It is pseudo-enlightened babble.

Though I don't agree with all of it, I like this comment. Your SSL example is excellent. In the context of online shopping, SSL may protect your credit card number in transit, but it can't protect you from keystroke loggers that intercept the information before it is ever encrypted. Yet, the industry still uses it and people are consistently compromised by malicious software.

On the other hand, I'm still learning about crypto, so I find these conversations illuminating. Thinking about ways to attack security, or trying to understand how other people approach attacks has given me a far deeper understanding than merely reading about it.

However, that's a personal bias because being critical of what I learn is how I learn. Needless to say, I used to routinely drive teachers crazy...

People tsk-tsk'ing security systems as "that won't be perfect, so it will just give users a false sense of security" has been going on for over 20 years.

Sometimes it's right, sometimes it's not. I remember being yelled at that anti-spam products were stupid because "I can't stand losing even 1 mail out of 10,000!!" (In reality, email is a lossy system that doesn't have four 9's of reliability anyway.) I heard the same thing about all sorts of products from people in both industry and academia only to watch them become significant parts of the IT world.[1] Nerds like black/white answers, especially about areas where they aren't experts. It makes the world much easier to understand.

(And I think it was a stroke of marketing genius for Zimmerman to call his product "Pretty Good Privacy" instead of anything implying perfection.)

So I have sympathy for your position here.

Still -- the JavaScript runtime is a hell of a place to try to encrypt anything. If your only hope is to add an obfuscation layer, call it that explicitly, and expect support to fall off. Adding more pieces to a system can make it less secure.

[1] and I still regret not following up with Dave Mann about his attempts to create a numbering system for attacks because someone else at the table said the AV vendors would never go along

This is a terribly flawed argument because we do not in fact accept "many potential vulnerabilities to SSL". On the contrary, we declare vulnerabilities in SSL that require gigabytes worth of observations for a single target to be "hair on fire" emergencies.

On the flip side, we accept staggeringly obvious flaws in the security model of "server-proof" Javascript crypto --- flaws like "despite what the server operators claim, they can at any time steal secrets from their users" --- flaws that have in the past been exploited by law enforcement. Not only do we accept them, but people like you get pissy when people like the "Ptacek fan club" on this thread point them out.

So, now that there's a fan club, do you have a mailing list or get togethers?!

When SSL gets broken, there are built in mechanisms that generally demonstrate 'something' is being screwed with.

With JS crypto, that goes away - if the JS is shipped over the wire, it can just be silently replaced.

If you're relying on SSL to protect the JS crypto, why bother with the JS crypto in the first place?

When vendors ship some kind of client-side pre-built verification capability (i.e. code that DOES NOT come over the wire), much like the existing SSL stack, things will change dramatically.

Your parents don't have to use key signing. They can just mail each other their public keys or upload them to a key server. That covers the encryption aspect of PGP. They can verify their identities by sharing secrets, but I would expect them to easily notice right away if they are talking with the wrong person...

Parents / mainstream users don't even understand the concept of keys. Keypairs are a technical detail that everyday (/non-apologist) users should not be exposed to.

Bollocks. Users can be educated. Throwing our hands up and saying that users are too stupid to understand crypto so we have to implement it in ways that don't work is advocating wasting everyone's time. Imagine if the car industry had decided that users wouldn't be able to understand car keys?

You can't teach people to do something less convenient than what they're already doing. Car keys are more convenient than never letting your car out of your sight outside of your own garage (which is what people were doing before car keys.)

PKI, as currently imagined, is less convenient than just speaking freely and feeling a nagging sensation that someone might be listening (which is what people are doing now.)

The difficulties in PKI are not that hard. Most of the problems are social, such as the belief that any implementation of PKI must be absolutely 100% bulletproof. Corporate deployments of PGP with no passphrase make the difficulty of executing spear phishing attacks considerably higher (especially when you consider that these are the usual way of getting initial access to corporate systems).

Not bollocks at all. Non-technical people shouldn't be expected to understand implementation details in order to use a system. Key exchange can be done without exposing users to the details of the system.

While I think it's best left assumed that they won't, I definitely think that non-technical users should make an effort to understand things like the basics of a key exchange, for their own sake. After all, they are relying on these systems to exchange sensitive information. In general I think people should be more careful about handing out personal information.

People use SSL every day and I bet most of the population, software engineers included, don't understand it. In fact most of the population probably don't even know that they're using SSL.

Email addresses are an implementation detail.

This is good -- choosing the right language is important. Cryptographers seem to be worse than average software engineers when it comes to designing interaction that is usable by mainstream users.

The thing that perplexes me about this whole debate is that some seem to think there's no inherent trust between parties exchanging encrypted messages.

That seems to be the first important thing to establish here. Alice sends public key to Bob, Bob encrypts the data using said public key and returns the encrypted data to Alice. What guarantee does Alice have that Bob didn't turn around and also post the plain text of Alice's request to [name your favorite social network here]? By the same token, what guarantee does Bob have that Alice won't turn around and post the plain text of the response on [name your favorite social network here]?

So it seems that inherently, even in the most secure of communications, there has to be trust between the parties exchanging the messages.

Please forgive me if this is already being discussed. The thread is rather long at this point so admittedly I didn't read every part before posting this.

EDIT: for correctness, cleaned up references to bob / alice.

So to frame the question in this light, the parties exchanging encrypted messages via javascript (on top of SSL) trust that they're doing everything in their power to maintain the integrity of their systems. Do they increase the surface area where attacks could take place? Sure. But in what way is sending encrypted messages on top of SSL any less secure than sending unencrypted messages on top of SSL?

One argument seems to be that there's no need for the added layer on top of SSL. Then I would ask, so why have passwords at all? After all it's guaranteed that the communications are secure. It looks like the disconnect here is that encrypted messages on top of SSL are behaving as a mechanism to help authenticate the recipient of the messages, like a password, and not trying to act in the same way that SSL behaves as a way to secure communications between 2 parties.

What about browser bugs?

I'd like to suggest that the software agent that has access to your private keys should NOT be in the same process as an agent that directly handles data from the internet. The reasoning is simple: defense in depth, and the principle of least privilege. Really, this is no different than the idea of running a web server in a chroot(2) jail.

A proper solution that might be something that can be provided in some sort of browser extension (though it's likely to have platform-specific requirements) is to simply call gpg as an external process. It shouldn't be hard to wrap that up in an API provided by the extension.

Of course, it would be even nicer if browsers provided that feature directly, as they could utilize platform-specific features such as a secure password-entry UI (e.g. pinentry).

I agree. It should talk directly to gpg-agent. That way, it would automatically have support for things like the Crypto Stick and smartcard readers, and wouldn't necessarily have direct access to the keys.

>WHY CAN'T I USE TLS/SSL TO DELIVER THE JAVASCRIPT CRYPTO CODE? You can. It's harder than it sounds, but you can safely transmit Javascript crypto to a browser using SSL. The problem is, having established a secure channel with SSL, you no longer need Javascript cryptography; you have "real" cryptography. Meanwhile, the Javascript crypto code is still imperiled by other browser problems.

NSA has got the keys.

I'm an activist in a regime and extra level of indirection may slow them down.

This doesn't need a long post to explain. There is a single principle which makes all browser-based JavaScript crypto completely insecure forever:

Never trust any input, nor run any code, delivered by a remote host, unless its original author's digital signature has been verified by multiple distinct trusted mirrors, and as long as there is no way to change this input or code without repeating the process.

What about using the Stanford JS Crypto library through a Google Chrome plugin (in addition to SSL) for an added layer of security?

Edit: Reading on, I see that the author addressed SJCL and browser plugins separately, but I still think that most of the problems raised in the article could be solved by using them in conjunction. That gives you a secure channel for delivering peer-reviewed encryption algorithms.

All attacks against JS crypto addressed assume a somewhat determined and active attacker. Although this is definitely the big problem to be concerned about, that does not mean that there is no place for JavaScript cryptography as protection against casual passive eavesdroppers. The problem is that users are quick to get a false sense of security because of this, not that JavaScript crypto is utterly without use.

Having said that, the state of web cryptography is abysmal. There is work on a Web Cryptography API [1], but so far browsers only implement the part that concerns the generation of cryptographically secure random values. Even then, an attacker can use injected JavaScript to override the window.crypto functions, so they would need to be protected somehow.

The lack of secure crypto prevents a whole class of applications from coming to JavaScript. Luckily my bank offers native apps, so I can just recommend my parents to use that instead of their website.

[1]: http://www.w3.org/TR/2014/WD-WebCryptoAPI-20140325/#scope-al...
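To illustrate the override concern mentioned above: any script that runs in the page can shadow `window.crypto` before the legitimate code uses it. A minimal sketch, with a plain object standing in for the browser's `window` so it runs anywhere:

```javascript
// Simulated browser window; in a real page this would be the global `window`.
const fakeWindow = {
  crypto: {
    getRandomValues(buf) {
      // Stand-in for the browser's real CSPRNG.
      for (let i = 0; i < buf.length; i++) buf[i] = Math.floor(Math.random() * 256);
      return buf;
    },
  },
};

// Injected attacker code runs in the same context and simply replaces it:
fakeWindow.crypto.getRandomValues = function (buf) {
  buf.fill(0); // "random" bytes are now all zero, so derived keys are predictable
  return buf;
};

// The page's crypto code has no way to detect the substitution.
const key = fakeWindow.crypto.getRandomValues(new Uint8Array(16));
console.log(Array.from(key)); // all zeros
```

This is why the comment above notes that `window.crypto` "would need to be protected somehow": in the page's execution context, globals are writable by any script that got there first.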

>assume a somewhat determined and active attacker

Cryptology is founded on an academic, mathematical, and logical basis. You don't assume your attacker is active or determined; you assume that any flaw that exists can and will be exploited against you.

The founding principle of modern cryptography is, to paraphrase: "If an attacker knows the entire system but lacks the key, they cannot recover the data."

>that does not mean that there is no place for JavaScript cryptography as protection against casual passive eavesdroppers

Yes, it does. It means you are no longer using cryptography and are simply doing obfuscation at that point. Mathematical crypto is out the window and gone; you admitted it. Your goal isn't crypto, but stopping casual and passive eavesdroppers. So you might as well use DES, or RC2. The algorithm doesn't matter any more. It's just a question of defining "casual".


My post is in regard to client-side webpage crypto, not application-level node.js crypto.

Diffie-Hellman also isn't secure against determined and active attackers (with power on the network). What you say about cryptology is utter nonsense.

> Your goal isn't crypto, but stopping casual and passive eavesdroppers. So you might as well use DES, or RC2. The algorithm doesn't matter any more.

Nonsense, it's not obfuscation. If it was obfuscation, people logging the sent data would have a reasonable chance of decrypting it. If you use JavaScript to AES encrypt things (with a good password), passive eavesdroppers cannot decrypt it later without breaking AES.

The weakness of Diffie-Hellman is authentication not the algorithm itself.

>Nonsense, it's not obfuscation.

The issue with in-browser JS that you seem to be completely missing: if you can't trust the network to deliver your message, how can you trust it to deliver your crypto unchanged and untampered?

The answer is you can't. If you create a system where you can, you render your crypto unnecessary.

1. The weakness of plain Diffie-Hellman is that the data you receive (the other part of the key exchange) can be from someone you can't trust. You shouldn't use plain Diffie-Hellman if your attack model includes active attackers on the network.

The weakness of JS crypto is that the data you receive (the JS crypto library) can be from someone you can't trust. You shouldn't use JS crypto if your attack model includes active attackers on the network.

See what I did there?

2. Under the attack model of the passive eavesdropper, the assumption is that the crypto works because the passive eavesdropper does not tamper with JavaScript. Was this not clear from my previous posts?

>[...]can be from someone you can't trust.

So, authentication? We are talking about authentication, right? Because I said that. The weakness of DHE is that you can't authenticate who you are talking to.

>You shouldn't use JS crypto if your attack model includes active attackers on the network.

A crypto model has to include every possible attack vector. Crypto isn't something that happens in a vacuum. You assume the worst, always. If you ever say, "No, nobody would go that far," you aren't doing crypto, you're doing obfuscation. Once a message leaves your hand, everything that can go wrong, will go wrong.

In crypto research, people suggested moving away from SHA-1 because an attack was found; not because it was remotely possible, but because it logically and mathematically existed. Crypto is about numbers and logic, not threat models, cost-benefit analysis, and what an attacker might plausibly do.

Obfuscation is a game of frustration where you create a bar and keep raising it. Your system is fundamentally broken, but you count on people getting frustrated or bored before they break it. It's a game of wits.


The system you're talking about isn't crypto. It's obfuscation. You're counting on somebody not moving beyond A, B, and C, getting frustrated and giving up. Crypto doesn't do that.

The weakness of JS Crypto is that you cannot authenticate the source of your crypto library so you cannot make sure it works properly. The problems are equivalent. All your ranting against what I'm saying is also directly applicable to plain Diffie-Hellman. Both simply are not secure under the active attacker model. That does not disqualify plain Diffie-Hellman from having some use (although not under most attack models), and that does not disqualify JS crypto from having some use (although not under most attack models).

I'm going to respectfully decline discussing what attack models are and their use in security protocols on HN, as you unfortunately appear to not understand what I mean (which is probably my fault) or what attack models are used for in security research. You also appear to be confusing security research and cryptography research. You can mail me if you want to discuss this further.

>The weakness of JS Crypto is that you cannot authenticate the source of your crypto library so you cannot make sure it works properly

One of many; this isn't the only issue. Even if you authenticate the host, what prevents tampering via MITM attacks?

> All your ranting against what I'm saying is also directly applicable to plain Diffie-Hellman

Yes and no. Standard whitepaper Diffie-Hellman, yes. But implementations can fix that, as you point out.

>That does not disqualify plain Diffie-Hellman from having some use


>and that does not disqualify JS crypto from having some use

Wrong. Because you keep dodging my main point. Anything that solves the problems of:

1) Is whoever you're talking to the right person (authentication)?

2) Is your code arriving unchanged (authentication + encryption)?

The two main issues with JS crypto. Solving them solves whatever problem you're trying to solve with JS crypto. And any solution that allows you to create secure client-side JS crypto fundamentally makes it obsolete. That's the problem I'm going on about.

> yes and no. Standard white paper Diffie-Hellman yes. But implementations can fix that. As you point out.

Well, there we have it. I happen to have worked on a project where plain Diffie-Hellman was used not long ago. The active attacker model was genuinely not a threat/solved by other methods in this particular context, and simple confidentiality provided by a plain Diffie-Hellman key exchange was enough. As another example, plain Diffie-Hellman is also used in SSH sessions without predistributed keys [1]. Clearly, the presence of active attackers is not always a problem in the real world! If that is the case, then there also exist legitimate uses for JS crypto [2].

As a sidenote nitpick, no implementation of plain Diffie-Hellman can fix the authentication issue. There are instead variations built upon plain Diffie-Hellman (notably authenticated Diffie-Hellman) that solve the authentication issue. But these protocols are not the same as plain Diffie-Hellman.

[1] RFC 4253, section 8

[2] I feel that I should stress again that although this is true for these particular use cases, JS crypto should not be trusted in general.

>Well, there we have it.

This is a slippery-slope argument: we solved the issue with X, therefore we can solve it with Y. The problem is that when you solve the issues browser-side JS crypto has, you render it unnecessary.

If you can.

1) Authenticate that the message is unchanged. I.e.: assures the libraries you are using are secure.

2) Authenticate who you're talking to. I.e.: assures the host is the correct host, not a MITM attacker.

3) Ensure the message is unreadable in transmission. I.e.: ensure any eavesdroppers are not logging you.

What use is there for client side JS crypto?

You've just solved every issue with two-way encrypted communication. Just build it into a toolkit and you render JS client-side crypto unneeded.


Also, to be clear: compiled JS, node.js, application-level stuff is useful. In-browser JS crypto is bad, completely untrustworthy, and broken.

> What use is there for client side JS crypto?

Uploading encrypted files over HTTP. A JavaScript SSH client[1] for throwaway servers. Client-side password hashing to prevent passive MITMs from finding raw passwords. These are not as secure as their native counterparts, and I don't claim they are. Nor do I claim they are resilient to active MITM attacks on the JS delivery. But they are protected from having their data logged in the clear by passive MITM eavesdroppers, and that means they may be useful to people who are aware of the security they don't have but appreciate the convenience of what they do have. This is all I'm saying.

Just like OpenSSH without server keys is vulnerable to MITM attacks, but still both useful and protected from eavesdropping. Just like people use plain Diffie-Hellman to negotiate a shared key, despite lack of authentication. Sometimes the active attack is simply not considered a problem, or the risk is simply accepted [2].

This is not a recommendation. This is a discussion about people having different requirements and uses. Almost all JS crypto used on the public internet does it wrong. But that doesn't mean that JS crypto is 100% wrong in any conceivable case. Most people using it are just stupid.

[1]: https://github.com/liftoff/GateOne

[2]: Choosing which risks to accept or not is daily practice for anyone dealing with security issues.

>Most people using it are just stupid.

Attacking the strawman. No one man will revolutionize JS crypto. Not you, not Turing. Mathematicians are involved; when they all agree on something, they're generally right.

The systems you're proposing do nothing. And you know they do nothing.

>that means that they may be useful to people who are aware of the security

But you don't care. It's a false sense of security. You're selling rice paper advertised as bulletproof vests. That's my problem.

Yes, these programs are entertaining thought experiments, but in the real world they're useless. And if you advertise them as being anything other than useless, you're no different from an 1860s snake-oil salesman.


Also, you mentioned assumed risk. Assumed risk is understanding that there exist OpenSSL exploits you don't know about, have no way of fixing, and no way of preparing for. You have to assume this risk to achieve any level of security. You have to be mindful that the landscape of security can change at any moment and render you insecure.

Assumed risk does not mean you're using a broken cryptosystem whose weak points exist BY DESIGN and are fundamentally impossible to patch out. That's just called broken crypto.

Please stop pretending I'm advocating JS crypto. I did no such thing, and you are being extremely disingenuous and rude for suggesting it.

Sometimes you don't need the data to be decrypted on the host.

Your crypto is still useful if the host doesn't want to decrypt anything. JavaScript crypto as an alternative to SSL makes no sense, but an implementation in which the server never gets the plaintext is currently only possible with client-side JavaScript crypto.

I agree that any discussion of using JavaScript-crypto instead of SSL is nuts and likely a strawman complaint.

But just because it's "the best we can do," that doesn't mean it's a good idea.

EDIT: One minute before I made this comment, 'tptacek stated the fallacy better elsewhere on this page: "we really need browser crypto to work, therefore it works".

Javascript crypto added to SSL is a great way to let your errors insert vulnerabilities into an otherwise secure page.

> assume a somewhat determined attacker

Most CVEs that come to mind assume a somewhat determined attacker.

It takes a reasonably determined attacker to commit to rails without permission [1] or run a ten-line perl script to crash a server [2] too.

Waving away a problem via "the bad guys would need to think for more than one second" is not exactly reassuring.

[1]: https://github.com/rails/rails/commit/b83965785db1eec019edf1...

[2]: http://www.ocert.org/advisories/ocert-2011-003.html

JavaScript crypto is simply not a protection against that attack model. It is protection against the attack model of the passive eavesdropper. That's all I'm saying. I completely agree with what you say.

Whether you view that attack model as something worth considering depends entirely on context. But it's a valid view for many applications. As long as people don't use it with any expectations of security under active attack models, I'd say that's okay.

> protection against the attack model of the passive eavesdropper

My argument would be that trying to protect against passive attackers with JS adds nothing beyond what SSL already offers.

Which is already required as a matter of course, and already compromises the payload if SSL is broken (again).

Sure, TLS probably covers many use cases. But if protection against overzealous public Wifi spots is your goal, or uploading encrypted files to something like Mega over HTTP, there are still uses that TLS cannot cover.

Again, I am not advocating JavaScript crypto. I'm just putting it in its place.

That's not true. Client side encryption allows the server to never have to touch user-generated content, which makes defense against subpoenas (and certainly blanket wiretapping) more feasible. It doesn't defend against the NSA who want to target a particular individual, but it's better than storing all content encrypted with the same known key, or storing it unencrypted.

Which circles back to the "Javascript is hostile to cryptography" point the article makes; I welcome any expert-audited JS libraries that can accomplish secure file encryption, for example. But even assuming this blocker is overcome, any illusions are shattered by the F5 key.

As pointed out elsewhere in the thread, there are few attacks that allow you to listen in on an SSL connection's content without also allowing you to modify that content - say, with a version that pastebins your keys.

Hence my argument that JS cannot provide anything SSL lacks, plus or minus some wishful thinking. Combine this with the fact that it's impossible to protect against a MITM-modified JS payload (see the "chicken-egg problem" portion), and you have a rather uphill battle here.

My colleagues who do this more than I say that while it's possible, it's rare for an attacker to be able to sniff traffic without being able to interfere with traffic.

Not true; depending somewhat on your attacker, there are plenty of ways to read emissions from networks that give you no opportunity to modify the traffic.

On the other hand, plenty of people who can modify the traffic exist. Starting with every employee at your ISP and the Amazon picker who shipped your router.

> Waving away a problem via "the bad guys would need to think for more than one second" is not exactly reassuring.

The NSA are well resourced, but I doubt they have much more than a second spare for every internet user on the planet. So speaking as a relatively uninteresting person, I actually would find that somewhat reassuring. If you're a Person Of Interest, that's obviously a very different situation, and you shouldn't use browser-based crypto.

You're right to point out the difference between targeted and general attacks, but you're also misrepresenting the problem. An attack on SSL can take quite a while to research and implement, but can then be used widely. One would expect the exact same scenario to play out with any javascript crypto library, as it does with all software.

So, yeah - if you rolled your very own library that's unique to this planet for exactly one website, congratulations! You're secure as long as there are no attackers! Doesn't really say anything useful about your security though. Or about the viability of using a fundamentally broken crypto platform to do crypto.

I disagree with one assertion. There is still a reason to use client-side web-based cryptography even if TLS is enabled: the scenario where users want to store something encrypted on a server without the server having the keys.

Now, I'm not defending JS as a method of doing this, but there is a use case.

I was under the impression that client side crypto was always a bad idea, is this really news?

End-to-end encryption basically requires crypto to be done on the client side. But there are different kinds of "client side."

The JavaScript sandbox is basically impossible to lock down. The plug-in that Google is developing is a much different beast.

The problem with cryptography is that if a human is supposed to remember the key, the key is probably weak and could be brute-forced, and if a computer remembers the key, the computer could be compromised and the key stolen.

To me, that's the biggest issue.

I would echo a lot of the comments here criticising the article title - it is (by its own admission) referring primarily to the environment in which Javascript is usually run (browsers), with very little about the language itself.

So the use of the word "Javascript" in the title is very deliberately misleading.

As for the rest of the title, http://meyerweb.com/eric/comment/chech.html

Title aside, the article is informative content-wise, but I'd personally be much more interested in an actual honest review of cryptography in Javascript the language - one that, for example, didn't choose to exclude NodeJS (or similar server-side tech). Can JS on the server do cryptography right? If not, why not?

This topic deserves a separate entry but cf. https://news.ycombinator.com/item?id=7900597

Ok, now what? There is clearly strong demand for cryptography in client side web applications. Telling people "don't do that" isn't good enough, they're going to do it anyway. How can we make it better?

One idea: a way to bundle, hash, and optionally sign a set of HTML/CSS/JS resources (not unlike a Chrome extension). If the bundle is updated the user can be prompted. If the user desires, they can check to see if trusted individuals or groups have already reviewed the code, or review it themselves. Perhaps the code is hosted on Github (or wherever) and people can comment on questionable changes there.

I agree that yes there is a great deal of demand for crypto on the client side and I don't think privacy on the Internet is going to move forward without a reliable way of doing cryto in that sphere. However I agree with the poster that client side JavaScript is not the answer to that demand. At this time it's a leaky boat and can be exploited.

A good point to start is this comment: https://news.ycombinator.com/item?id=1951556

These seem like pretty weak arguments to me, especially if you were able to restrict execution to resources in the bundle (maybe that means no "eval" -- fine)

Most languages permit similar amounts of runtime dynamism, they just don't make it as easy as JavaScript. Actually, in some ways JavaScript is better than other languages in this respect:

* Using closures to encapsulate "private" variables is pretty bulletproof, AFAIK.

* ES5 features like "Object.freeze" and "strict mode"

* Object-capability safe subsets of the language, e.g. Caja, SES, etc

* CommonJS-style module system

What about JS Cryptography over SSL? Another security layer to protect against those who can break SSL, or is it just useless?

Useless. If the SSL is broken, the attacker can see through to all the messages the client is being sent, and decrypt whatever the client can decrypt.

Here's the thing about why JS-in-webpage crypto is fundamentally useless. The purpose of cryptography is to, somehow, secure some sort of communication between two parties. Maybe it's as tiny as a zero-knowledge proof, maybe it's a full-on encrypted general-purpose channel like SSL/TLS provides that can send arbitrary data back and forth in real-time, maybe we're trying to send small proofs of a message's integrity via hashes or signatures, maybe we're sending messages from our past self to our future self and want it safe in the meantime "at rest", but crypto is always about some sort of communication between parties A and B (the traditional Alice and Bob, even in that last case Bob is just an older Alice wearing a mustache).

Web pages run in a sandbox in which they fundamentally become part of the server's environment briefly. They're just an extension of the server, and are essentially designed from top to bottom to ensure that the only thing a web page can access is what was sent by the same server. The only cookies a server can get are the ones it set. (Modulo some cross-internal-domain stuff, but it doesn't change my point here.) The only requests it can manipulate are the ones it sent. The only resources the web page will access are ones the server tells it to. (Yes, a server can direct you to resources on other servers, and that's actually a big deal, a big delegation of trust, and one of the trickiest corners of browser security.) The web browser forbids a page from the server from accessing the hard drive on its own, or accessing any other site's stuff, etc. Structurally, the web page is just an adjunct to the server, by design, and anywhere this property fails is considered a security hole and fixed as quickly as possible. Without any ability to access any local resources that were not themselves originated from the server (i.e., HTML local storage doesn't get you out of this, the server fully controls it), the web page has no independent identity to assert. It is totally in thrall to the server.

Note how I kept saying "web page", and not "web browser". There's a big difference; web browsers are allowed to do a lot of things a web page is not. Pages have a distinct execution context and security policy different from the browser.

In the crypto sense, there's no communication between a "web server" and the "web page"; it's all just one system. The web browser may use SSL/TLS to communicate the necessary information to form a web page, but once that is done, the web page is deliberately put into a context where it is just an adjunct to a server, and there's no distinct two parties with which to communicate anymore, from a crypto sense. This may seem counterintuitive, because we see messages flowing back and forth, but that's all internal chatter, the browser functioning as an internal bus for the page/server unified security context. We have turned the full power of our cryptography and security research into making web browsers that enable web pages to function as extensions of the web server. It was and is not easy.

Further, what defines "server" is not the human word or concept, which can be distracting. What defines the "server" is whatever finally produced the bytes that were used to create the web page. If your browser isn't using TLS/SSL, that turns out to be pretty much "just whoever felt like serving some bytes to you". (If you don't think that "intercepting web requests" is practical, it is. It's off-the-shelf tech for hackers. Do not assume it is hard.) On the other hand, if SSL is properly used (and let's skip over what that means and the validity of cert authorities, etc), then you do have assurance that the bytes came from your server without anybody in between, and the web browser is providing assurance to the web page that it is on an uninterrupted channel.

When not using browser crypto, it just doesn't matter how you spin around; the attacker owns the web page, and you can't do anything about it. And when I say "own", I mean it, fully: the web page is actually functioning as an adjunct to their server, and you're stuck with the results. It doesn't matter what crypto you think you're pushing to the user's browser, because what's actually happening is that the attacker is pushing their crypto to the user, and its relationship to what you intended to push is entirely and solely up to their good graces, and by definition we're pretty much talking about people without good graces. Without SSL, you have ZERO control over the user's webpage, and the attacker has all of it. Unsurprisingly, there's no crypto system that can survive that restriction.

When using browser crypto, SSL/TLS is providing you the maximum degree of assurance that is possible already that the channel is secure, more or less to the maximum extent the network will permit (i.e. attackers can observe byte flows or who you connect to and there's not much that can be done about that). The argument that in-page crypto is useless amounts to the observation that this binary situation admits of no "threading the needle", especially in light of the fact that without SSL being used we pretty much get to assume that an attacker can do absolutely anything to the data between the user and your server, and it's pretty hard to construct a crypto system that can stand up to the attacker arbitrarily manipulating it on the way to user, which is what many people here are trying to do.

(Incidentally, clearly understanding the difference between the web page and the web browser is also important for understanding why this is particularly a problem for web pages and not so much other things. It's because web pages go through so much effort to run in a sandbox such that the server doesn't get any additional permissions it shouldn't have via sending malicious web pages.)

(There's a better blog post in here struggling to get out; this is my first attempt to put this in words. The server probably needs a page/server distinction too, for instance; the web page isn't running with the full "server" privileges, of course, it's actually also running as a "page" sort of thing too, where the browser and the server are collaborating to create a single unified security context hosted within the two of them.)

Even if it's some kind of Diffie-Hellman JS crypto?

I submitted a Show HN yesterday of what I believe is a potentially legitimate use case for javascript crypto (or more specifically, in-browser client-side crypto):

Bring Your Own Filesystem (https://github.com/diafygi/byoFS)

Example chat demo: https://diafygi.github.io/byoFS/examples/chat/

It seems like an unhosted-style app (unhosted.org) can mitigate all of the OP's concerns.

> Secure delivery of Javascript to browsers is a chicken-egg problem.

I address some of this in the byoFS README[1] and again in an /r/crypto discussion[2].

Since the webapp is unhosted, the webapp is built to work when being served from anywhere, including your local filesystem. This means that you could download the webapp anonymously, inspect/audit/checksum it, then run it from your local filesystem in the browser (try it! just right click and Save As...). Alternatively, you could load it from a server that you would trust to kill itself rather than comply with a secret court-ordered compromise (e.g. Lavabit, Internet Archive, etc.).

Additionally, since the webapp is just static files, all the webapp server sees is anonymous requests (over https to prevent MITM). It doesn't know who is requesting the static files, so it would be difficult to perform a targeted malicious injection. You would have to broadcast the injection, which is generally ill-advised since it might be spotted by a vigilant third party. Most of the surveillance injection attacks that have been leaked have been targeted, so this basically cuts off that attack vector.

So, in order to compromise this webapp through injection, you'd have to hack into the trusted static server and blindly serve the injection to everyone who requests the webapp (hopefully including your target). This is basically the same attack vector you'd have to do if you were trying to inject something into a download-and-install local application.

I don't think javascript crypto is universally a good idea, but for this unhosted use case, it can limit attack vectors to the same as download-and-install local applications.

> Browser Javascript is hostile to cryptography.

True, which is why WebCryptoAPI should be prioritized, and I can stop using SJCL in byoFS. Once APIs for crypto primitives are baked into browsers, this argument disappears.

> The "view-source" transparency of Javascript is illusory.

It's certainly much better than inspecting desktop apps. Also, if your webapp is unhosted, you can certainly publish a signed hash of the static files, which can be verified after you download the app and before you run it.

One infrastructure improvement that might be very helpful would be to be able to buy an SSL certificate from a CA that is limited to a particular file hash, which the browser checks before showing the connection as "validated" (maybe a checkmark beside the https lock?).

[1] - https://github.com/diafygi/byoFS#security-and-philosophy

[2] - https://pay.reddit.com/r/crypto/comments/289w7x/bring_your_o...

same submission from 3 years ago: https://news.ycombinator.com/item?id=2935220

I've stopped reading at: "You could use SSL/TLS to solve this problem, but that's expensive and complicated."

This article is all over the place.

First of all calling it "JavaScript cryptography" only to immediately correct itself "oh, we mean browser client-side scripted cryptography, not JavaScript in general, say Node.JS".

Then calling it "cryptography", when all the article talks about is hashing passwords. There's more to cryptography than hashing passwords, and not all of it is susceptible to the attacks described.

So the real title of this article is "Client-side scripted password hashing in browsers less than ideal". I know, doesn't roll off the tongue as sweetly as "JavaScript eats babies", but what you gonna do.

Third, the faulty logic of "this new technique has a bad edge case, so we should completely reject it, despite any benefits".

The article just glosses over the situation where the script is served securely and yet you hash at the client. "We already have a secure connection, anyway" the author claims. Sure, but the server still gets the plain text password, because TLS doesn't hash, it encrypts (and you can decrypt). If the server is passing the login request to a third party to check the hash, turns out it's still a good thing to hash at the client so the intermediating party can't abuse its role, without the horrid UX of OAuth. Has the author thought of that? Nope, just dismissed the potential outright.

Plain old-school TLS/SSL connections have exploitable edge-cases as well. Should we "consider them harmful"?

Intelligent conversations about security require nuanced opinions that go into when something is useful, and when it isn't, and let us make the call if it's harmful or useful for a given project.

Calling something "harmful" outright and only listing the cons without the pros isn't this kind of intelligent conversation. It's just counterproductive scaremongering.

This article has virtually nothing to do with password hashing, but the fact that readers routinely take that away from it is part of why I don't like it much either. We didn't promote it, or post it on HN; I think I posted it to Twitter once, and now people re-find it once a year.

I'll absolutely win an argument with you (or, I think, anyone else) about browser Javascript crypto. It's simply a bad idea. I just don't think this particular article will.

We don't reject browser Javascript crypto "despite any benefits". We reject it because those benefits are illusory. It's clear that browser crypto makes people feel better, but TSA airport security also makes the majority of Americans feel better too.

Hey man, I loved that article, I read it a while ago. It seemed like it had a pretty simple takeaway: JS crypto by itself is laughable, and JS crypto on top of real crypto is pointless.

I do have a question though -- you say that 'you can deliver the JS crypto with SSL, but then it's irrelevant because the connection is secure,' so I'd like to know, what's your opinion of blockchain.info?

It adds JS crypto on top of SSL, and provides a signed browser extension, for reasons that seem to make sense given its use case.

what about distributed as signed browser extensions?

That's what Google is working on, and it's a much better environment. The code lives on your computer all the time, and you (in theory, assuming the proper browser settings) can make sure you are using a constant version of the code that matches up with what other people are using and auditing.

You can't lock down regular JavaScript at all, but a plug-in is different: your browser should (in theory) tell you when a plug-in is asking to be updated and give you the option to say "nope."

In theory, you could even walk up to a brand-new (assuming uncompromised) computer and reinstall the plugin. But you would still need some way of knowing that you were installing the same version you decided to trust earlier. Recognizing checksum pictures, I guess?

It can work, but the users have to be able to 'read' the code from the server and sign it themselves. Certain hashes can be considered trustworthy, but that would require either a 3rd party or being able to put those hashes on paper and compare them.

I would say that signed browser Javascript extensions are my least favorite place to viably deploy crypto.

Agreed the article is not very insightful. Another place where it misses the point entirely:

> The problem is, having established a secure channel with SSL, you no longer need Javascript cryptography; you have "real" cryptography.

This overlooks a major use case of client-based cryptography where you don't want to expose any private keying material to the server.

Not really. If you don't trust the server to hold the key, you can't trust the client you downloaded from the very same server.

It's not always about trust. Sometimes it's more about the burden of having the keys. Mega is a good example of why client-side encryption is used: they don't want the burden/liability of knowing what is stored on their servers. (They probably know it anyway, but the idea stands.)

That protects mega (the server); it does nothing to protect the client.

As the client, I care about my protection.

While you are correct that a server can serve me malicious Javascript encryption code that discloses my keys, there's a big difference: the act cannot be hidden.

If my secret keying material instead is on some company's servers it can be taken surreptitiously (by court order or not).

While it may be mathematically impossible for the server to give you hostile code with no way at all of you detecting it, in practice you are not going to check the JavaScript you download each time. Nor does the browser have any way of doing version control for you, or of verifying what's actually running.

I'm not sure why HN doesn't let me reply to danielweber's comment. Thread is getting too deep?

I admit that in practice you're not going to check the Javascript. But from the point of view of someone trying to get their hands on private material, the difference is huge. If the private material is on a server, a simple court order (or not) can give you access to potentially millions of customer records. But if you wanted to do dragnet collection when client-side crypto is in use, you now face millions of small but non-zero risks of getting exposed.

[EDIT: spelling]

No, it does not overlook that "major use case". You're confusing wanting something to work in a specific way with whether things actually work that way. If there's a "#1 most common fallacious reasoning strategy" employed by advocates of browser crypto, it's that one.

Wow, sensitive topic? No need for ad hominem. Thank you very much.

I'm not sure I see the ad hominem that you're seeing.

Why make assumptions on my motivation? And why assume I wouldn't know what subjectivism is? I think your comment could have been way more useful if you replied to the argument instead.

I agree that the article fails to expose the main issues properly, but JavaScript crypto in the browser, in its current implementation, is harmful. It gives you a false sense of security.

The biggest issue is that the crypto code can be changed at any time transparently. It's typically distributed by the same party that has your data so all you're really doing is transferring trust to them.

> Third, the faulty logic of "this new technique has a bad edge case, so we should completely reject it, despite any benefits".

What you are calling an "edge case" is the main feature of the language since its first iteration: it runs server-supplied scripts on the client. And yes, that does mean it must be completely rejected for crypto code.

If the article glosses over something, it's the fact that those problems are fundamental. They can't be amended and won't go away at any point in the future. If anything, the article should make that still more obvious.

I guess that's only the second article I've upvoted on HN... At first I thought "No shit! But I'll take a look", but after reading these kinds of responses, yes, I think it should get a few days on the front page.

Long story short, this topic won't end until XSS gets fixed. The people writing HTML don't consider XSS their problem; it's been, what, 20 years?

Nobody even knows what 'fixing' XSS would look like.

Um, the "people writing HTML" have done a lot of work to prevent XSS, by introducing Content-Security-Policy and other HTTP headers: http://ibuildings.nl/blog/2013/03/4-http-security-headers-yo...

The main problem is backwards compatibility, as older browsers don't support them, but the idea that people have their head in the sand re. XSS is complete nonsense.

Wrong. Even in an XSS-free web, JavaScript-based cryptography in the browser is unsafe against an attacker who can MITM: the attacker can just replace the crypto code being used.

There is no permanent cure for any attack surface in a given technology. Security is an ongoing effort and requires deep understanding of the underlying platform.

If "writing HTML" exposes cross-site scripting vulnerabilities on your site, then maybe "writing HTML" isn't so easy after all.

It's any MITM attack, not just XSS, right?
