
JavaScript Cryptography Considered Harmful - mazsa
http://matasano.com/articles/javascript-cryptography/
======
utnick
I disagree that just because it's possible for a malicious or compromised
server to send evil javascript, javascript crypto should not be done.

You have to analyze the threat model. Take the note-taking webapp they use as
an example. Without javascript crypto, anyone who breaks into the server can
read everyone's notes. Likewise, the FBI can get a warrant and have the owner
of the web application turn over everyone's notes.

If they do use javascript crypto, instead of just getting a warrant for the
machine, the FBI has to convince the owner of the site to modify their source
code to target an individual user and steal their key. That is a significant
difference. Also, the victim has to log in again for that attack to be
successful; if they just used the server for a couple of weeks and left, they
are safe.

Likewise, an attacker who breaks into the server can't get a dump of all the
notes in the system. They again have to modify the source code, which could be
noticed by the owners and users, and that takes more sophistication than the
average aim-and-shoot attacker has.

So is javascript security perfect right now? No... but that doesn't mean it
should be dismissed.

~~~
cliveowen
It's not just a malicious server: an attacker can put himself between you and
the perfectly secure server and then send you whatever javascript code he
likes. That's the whole point. And yes, since it cannot be made secure, it
should be dismissed outright.

~~~
lloeki
> It's not just a malicious server

The author effectively _dismisses_ the point about it being a malicious
server, especially with "Just use SSL", because, well, if an attacker has
landed on the server, SSL means squat.

The author is concerned about:

- the unavailability of secure functions necessary for crypto (as simple as
an RNG, as useful as an AES call, or as complex as a full-blown PGP API with
transparent, ad-hoc key management) in browsers

- the unavailability of a secure, non-monkey-patchable runtime environment
for JS crypto code to execute in, guaranteeing one can use the aforementioned
functions as intended

- the vulnerability of the code as content, in the channel itself (when not
using SSL) or in the browser itself (XSS and all)

All of those are legitimate causes of concern (IOW, in the crypto world,
gaping holes rendering client-side JS crypto untrustworthy).

~~~
cwmma
> - the unavailability of a secure, non-monkey-patchable runtime environment
> for JS crypto code to execute in, guaranteeing one can use the
> aforementioned functions as intended

That one, at least, can be dealt with by web workers.

~~~
joshuacc
Worker = myMaliciousWorkerImpersonatorFunction;
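Spelled out a bit more fully, the one-liner above is the whole attack. A minimal sketch, using a node stand-in for the browser's `Worker` global (all names illustrative, not a real exploit):

```javascript
// Stand-in for the browser's Worker constructor (node has no DOM Worker);
// in a real page this would be window.Worker.
globalThis.Worker = function Worker(url) { this.url = url; };

// Attacker-injected script, running first in the same page context:
const RealWorker = globalThis.Worker;
globalThis.Worker = function MaliciousWorker(url) {
  const worker = new RealWorker(url);
  worker.intercepted = true; // stands in for wrapping postMessage, leaking keys, ...
  return worker;
};

// The "secure" crypto code, running later, unknowingly constructs the impostor:
const w = new Worker('crypto-worker.js');
```

Because any script that executes earlier in the same context can replace the constructor, moving the crypto into a worker doesn't isolate it from the page that spawns it.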

------
tptacek
Previously:
[https://news.ycombinator.com/item?id=5123165](https://news.ycombinator.com/item?id=5123165)

I think Nate Lawson does a much better job of making this argument than I did:

[http://rdist.root.org/2010/11/29/final-post-on-javascript-cr...](http://rdist.root.org/2010/11/29/final-post-on-javascript-crypto/)

~~~
mazsa
Sorry, I didn't notice that it was submitted previously: TFTL.

~~~
IgorPartola
I've been thinking that it's time this article made it to the front page. If
this were an old-school forum, I'd ask the mods to pin it at the top. There are
just too many damn discussions of JS crypto on here. It's like flowing glass
all over again.

------
jashkenas
I'm curious if anyone who worked on this article would be willing to comment
on what they think of Keybase.io's client-side crypto implementation:

> Browser crypto can be scary. Do you have an evil extension installed? We
> can't tell. Further, have we been tortured into serving you custom,
> targeted JavaScript? Hopefully you're not that important.

> So: only use this page if (1) you feel your browser is clean and (2) a
> life doesn't depend on it.

[https://keybase.io/docs/server_security](https://keybase.io/docs/server_security)

~~~
mike-cardwell
If you use the web-based crypto from keybase.io, your key can be compromised
if they are compelled to compromise it, or if they are hacked.

The good thing about keybase is that they also provide a cli tool for
interacting with the service, so your private key never needs to go near a
website.

I personally use a smart card and reader, so even the cli tool couldn't read
my private key if it was compromised.

I had a keybase.io account for a few days and then deleted it recently. It
seemed pretty nice, but then they sent me some invitations and it dawned on me
that I don't know anyone else who would use it, and it adds even more
complexity on top of the existing system, so it isn't going to be that useful
for newbies either.

------
haberman
Since this article was written, Google released an alpha version of a JS-based
crypto tool: [http://googleonlinesecurity.blogspot.com/2014/06/making-end-...](http://googleonlinesecurity.blogspot.com/2014/06/making-end-to-end-encryption-easier-to.html)

They directly address some of the arguments against JS crypto:
[https://code.google.com/p/end-to-end/](https://code.google.com/p/end-to-end/)

When End to End was released I was curious if any proponents of "JS Crypto
Considered Harmful" would have a response, but I didn't see one, nor do I see
one here. I would be interested to see a response for why the Google approach
is flawed.

~~~
jerf
"They directly address some of the arguments against JS crypto:"

And in the very first 5 words, "End-To-End is a Chrome _extension_ " (emphasis
mine), they take themselves out of the domain of the original post, which is
90% about Javascript included in web pages, not Javascript in general.

~~~
haberman
From the page: "For the rest of this document, we're referring to browser
Javascript when we discuss Javascript cryptography."

Browser extensions are still part of the browser. They still have DOMs, for
example.

Maybe the article is trying to limit itself to talking about "web page
JavaScript". But if so, that isn't very clear.

------
phlo
While I agree with most of the article, in light of recent events (blanket
surveillance, Heartbleed), an argument could very well be made in favour of
using JS crypto -- not by itself, but in addition to TLS.

> The problem is, having established a secure channel with SSL, you no longer
> need Javascript cryptography; you have "real" cryptography.

Scenario 1: You run a note-storing service as described in the article. Your
site is delivered through TLS, using one of the non-DHE ciphersuites. You used
your CA's web interface to generate the certificate and private key, because
it's simpler than generating your own CSR. An NSL with an attached gag order
requires your CA to submit all generated RSA private keys to the NSA and shut
up about it.

Without JavaScript crypto, the NSA is able to passively decrypt each
connection and collect all of the notes' contents.

If you add JavaScript crypto (again, as described in the article), they need
to either mount an active attack (which might get discovered) or find a flaw
in the JS crypto (which is very much possible -- but adds work).

Scenario 2: You run any site with a login prompt. You are using OpenSSL.

Without JavaScript crypto: during a brief window of two years or so, attackers
may be able to exploit Heartbleed and read unencrypted passwords that were
submitted by users.

If you add JavaScript crypto, attackers may be able to exploit Heartbleed and
read HMAC-digests of recently submitted user passwords. Other sensitive data
might, of course, still be disclosed.

------
Joeboy
Just putting this heresy out there to see what happens:

I wouldn't want anybody betting their life or freedom on browser based crypto,
but if widely adopted and well implemented, maybe it is good enough to make
mass surveillance significantly harder? If _everybody_ 's crypto code is being
tampered with, somebody is likely to notice.

------
flavor8
Browser cryptography has its place - the simple fact is that PGP is too hard
for everyday users in its current guise (your parents are not going to go to
key exchange parties). And, requiring users to install binaries on their
computer (which can feasibly come with auto-update) to use encryption has just
as high a risk of being subverted by somebody with malicious intent who is
trying to target the communications of a particular individual. (Which is
harder: acting as MITM in a company's internet traffic, or breaking into the
target's Windows desktop?)

Mass market encryption is a worthy goal, and browsers are a reasonable
platform to achieve that goal. Educating users about not installing extensions
that can filter pages in their browsers, recommending incognito mode, etc are
good steps. And of course TLS should be considered a minimum requirement for
transport. Browser encryption is never going to be military grade, but it's a
step up over unencrypted communication.

~~~
tptacek
PGP being "too hard" doesn't mean that browser crypto "has its place". As I
said upthread: if there's a "#1 most common fallacious reasoning strategy" for
browser crypto experts, it's this one: "we really need browser crypto to work,
therefore it works". No.

~~~
personZ
_PGP being "too hard" doesn't mean that browser crypto "has its place"._

While I've always enormously respected your opinion, you seem to have a very
black/white perspective, aggressively attacking and/or belittling anything
that isn't an infallible solution, even if it's a significant improvement for
many or most scenarios.

You don't seem terribly pragmatic in many cases. I realized that long ago in a
discussion on passwords and the browser -
[https://news.ycombinator.com/item?id=2000833](https://news.ycombinator.com/item?id=2000833)

So many password incidents since would have been complete non-incidents with
such a solution (or _anything_ similar), so I've always remembered your raw
negativity because it wasn't a perfect solution: as though, because it didn't
solve every possible issue, it were better to solve no issues.

~~~
derefr
What's the point of a fallible solution? The only people who actually need
cryptography need infallible solutions. Everyone else can rely on security by
obscurity--rot13ing their text will be good enough for them.

~~~
personZ
_What 's the point of a fallible solution? The only people who actually need
cryptography need infallible solutions._

There is no such thing as an infallible solution. Start with that. When you're
talking about actors who are exploiting RNG weaknesses, broaden your horizons
a little.

PGP needs a key file. You need to (optionally) enter a passphrase. All someone
needs to do is steal your key file and circumvent your passphrase (for either
of which there are _countless_ mechanisms. They aren't trivial, but if we're
talking about organizations that are taking advantage of imperfect RNGs...)
and boom, PGP has been rendered a false sense of security over the history of
your communications. I mean, if we're talking about rogue actors taking over
servers and injecting false scripts, such a situation is just as viable.

Everything is on a gradient. Any simplification (such as "fallible versus
infallible") is just garbage.

Again, I realize Ptacek is a bit of a hero around here, and I'm not
questioning his words above, but I go back to his response to that password
thing, which was the moment I understood the disconnect between big security
talk and actual security. When the alternative is (and _continues to be_ )
nothing -- which is exactly the case in the password discussion -- discarding
options because they don't cover every scenario is absurd. It is grossly
destructive, just as it's destructive to discredit PGP because it requires
access to a keyfile.

~~~
derefr
I think the difference is between attacks that are currently well-known, and
are _automatable_ (attacks that are, effectively, on the cryptosystem
itself)--and attacks that boil down to social-engineering/rubber-hose
cryptanalysis (attacks that are, effectively, on you.)

Or, to quote cperciva's talk
([https://news.ycombinator.com/item?id=7883707](https://news.ycombinator.com/item?id=7883707)),
"the purpose of cryptography is to force the US government to torture you." If
a cryptosystem makes torturing you for the required information easier than
attacking the cryptosystem itself, the cryptosystem is "strong enough." Any
system for which this isn't true isn't doing its job.

------
datashovel
The thing that perplexes me about this whole debate is that some seem to think
there's no inherent trust between parties exchanging encrypted messages.

That seems to be the first important thing to establish here. Alice sends
public key to Bob, Bob encrypts the data using said public key and returns the
encrypted data to Alice. What guarantee does Alice have that Bob didn't turn
around and also post the plain text of Alice's request to [name your favorite
social network here]? By the same token, what guarantee does Bob have that
Alice won't turn around and post the plain text of the response on [name your
favorite social network here]?

So it seems that inherently, even in the most secure of communications, there
has to be trust between the parties exchanging the messages.

Please forgive me if this is already being discussed. The thread is rather
long at this point so admittedly I didn't read every part before posting this.

EDIT: for correctness, cleaned up references to bob / alice.

~~~
datashovel
So to frame the question in this light, the parties exchanging encrypted
messages via javascript (on top of SSL) trust that they're doing everything in
their power to maintain the integrity of their systems. Do they increase the
surface area where attacks could take place? Sure. But in what way is sending
encrypted messages on top of SSL any less secure than sending unencrypted
messages on top of SSL?

One argument seems to be that there's no need for the added layer on top of
SSL. Then I would ask: why have passwords at all? After all, it's guaranteed
that the communications are secure. It looks like the disconnect here is that
encrypted messages on top of SSL act as a mechanism to help authenticate the
recipient of the messages, like a password, rather than acting the way SSL
does, as a way to secure communications between the two parties.

------
pdkl95
What about browser bugs?

I'd like to suggest that the software agent that has access to your private
keys should _NOT_ be in the same process as an agent that directly handles
data from the internet. The reasoning is simple: defense in depth, and the
principle of least privilege. Really, this is no different than the idea of
running a web server in a chroot(2) jail.

A proper solution that _might_ be possible to provide in some sort of browser
extension (though it's likely to have platform-specific requirements) is to
simply call gpg as an external process. It shouldn't be hard to wrap that up
in an API provided by the extension.

Of course, it would be even nicer if browsers provided that feature directly,
as they could use platform-specific features such as a secure password-entry
UI (e.g. pinentry).

~~~
mike-cardwell
I agree. It should talk directly to gpg-agent. That way, it would
automatically have support for things like the Crypto Stick and smartcard
readers, and wouldn't necessarily have direct access to the keys.

------
lukasm
>WHY CAN'T I USE TLS/SSL TO DELIVER THE JAVASCRIPT CRYPTO CODE? You can. It's
harder than it sounds, but you can safely transmit Javascript crypto to a
browser using SSL. The problem is, having established a secure channel with
SSL, you no longer need Javascript cryptography; you have "real" cryptography.
Meanwhile, the Javascript crypto code is still imperiled by other browser
problems.

The NSA has got the keys.

I'm an activist in a regime, and an extra level of indirection may slow them
down.

------
peterwwillis
This doesn't need a long post to explain. There is a single principle which
makes all browser-based JavaScript crypto completely insecure forever:

Never trust any input, nor run any code, delivered by a remote host, unless
its original author's digital signature has been verified by multiple distinct
trusted mirrors, and as long as there is no way to change this input or code
without repeating the process.

------
hawkharris
What about using the Stanford JS Crypto library through a Google Chrome plugin
(in addition to SSL) for an added layer of security?

Edit: Reading on, I see that the author addressed SJCL and browser plugins
separately, but I still think that most of the problems raised in the article
could be solved by using them in conjunction. That gives you a secure channel
for delivering peer-reviewed encryption algorithms.

------
DCKing
All the attacks against JS crypto addressed here assume a somewhat determined
and active attacker. Although that is definitely the big problem to be
concerned about, it does not mean that there is _no_ place for JavaScript
cryptography as protection against casual passive eavesdroppers. The problem
is that users are quick to get a false sense of security because of this, not
that JavaScript crypto is utterly without use.

Having said that, the state of web cryptography is abysmal. There is work on a
Web Cryptography API [1], but so far browsers only implement the part that
concerns the generation of cryptographically secure random values. Even then,
an attacker can use injected JavaScript to override the window.crypto
functions, so they would need to be protected somehow.
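To make that concern concrete, here is a minimal sketch. A plain object stands in for the browser's `window.crypto` (illustrative, not the real API surface); in an actual page the injected script would overwrite the global:

```javascript
// Stand-in for window.crypto: getRandomValues fills a typed array with
// (here, non-cryptographic) random bytes.
const windowCrypto = {
  getRandomValues(buf) {
    for (let i = 0; i < buf.length; i++) buf[i] = Math.floor(Math.random() * 256);
    return buf;
  },
};

// Injected script running earlier in the page replaces it with a
// predictable version:
windowCrypto.getRandomValues = function (buf) {
  return buf.fill(4); // every "random" byte is now known to the attacker
};

// The application later generates a "random" key and gets all 4s:
const key = windowCrypto.getRandomValues(new Uint8Array(16));
```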

The lack of secure crypto prevents a whole class of applications from coming
to JavaScript. Luckily my bank offers native apps, so I can just recommend my
parents to use that instead of their website.

[1]: [http://www.w3.org/TR/2014/WD-WebCryptoAPI-20140325/#scope-al...](http://www.w3.org/TR/2014/WD-WebCryptoAPI-20140325/#scope-algorithms)

~~~
kofalt
> assume a somewhat determined attacker

Most CVEs that come to mind assume a somewhat determined attacker.

It takes a reasonably determined attacker to commit to rails without
permission [1] or run a ten-line perl script to crash a server [2] too.

Waving away a problem via "the bad guys would need to think for more than one
second" is not exactly reassuring.

[1]:
[https://github.com/rails/rails/commit/b83965785db1eec019edf1...](https://github.com/rails/rails/commit/b83965785db1eec019edf1fc272b1aa393e6dc57)

[2]:
[http://www.ocert.org/advisories/ocert-2011-003.html](http://www.ocert.org/advisories/ocert-2011-003.html)

~~~
DCKing
JavaScript crypto is simply not a protection against that attack model. It
_is_ protection against the attack model of the passive eavesdropper. That's
all I'm saying. I completely agree with what you say.

Whether you view that attack model as something worth considering depends
entirely on context. But it's a valid view for many applications. As long as
people don't use it with any expectations of security under active attack
models, I'd say that's okay.

~~~
kofalt
> protection against the attack model of the passive eavesdropper

My argument would be that trying to protect against passive attackers with JS
adds nothing beyond what SSL already offers.

Which is already required as a matter of course, and already compromises the
payload if SSL is broken (again).

~~~
flavor8
That's not true. Client side encryption allows the server to never have to
touch user-generated content, which makes defense against subpoenas (and
certainly blanket wiretapping) more feasible. It doesn't defend against the
NSA who want to target a particular individual, but it's better than storing
all content encrypted with the same known key, or storing it unencrypted.

~~~
kofalt
Which circles back to the "Javascript is hostile to cryptography" point the
article makes; I welcome any expert-audited JS libraries that can accomplish
secure file encryption, for example. But even assuming this blocker is
overcome, any illusions are shattered by the F5 key.

As pointed out elsewhere in the thread, there are few attacks that allow you
to listen in on an SSL connection's content without also allowing you to
modify that content - say, with a version that pastebins your keys.

Hence my argument that JS cannot provide anything SSL lacks, plus or minus
some wishful thinking. Combine this with the fact that it's impossible to
protect against a MITM-modified JS payload (see the "chicken-egg problem"
portion), and you have a rather uphill battle here.

------
unethical_ban
I disagree with one assertion. There is still a reason to use client-side
web-based cryptography even if TLS is enabled: the first scenario, where users
want to store something encrypted on a server without the server having the
keys.

Now, I'm not defending JS as the method of doing this, but there is a use case.

------
Igglyboo
I was under the impression that client side crypto was always a bad idea, is
this really news?

~~~
danielweber
End-to-end encryption basically requires crypto to be done on the client side.
But there are different kinds of "client side."

The JavaScript sandbox is basically impossible to lock down. The plug-in that
Google is developing is a much different beast.

------
chris_mahan
The problem with cryptography is that if a human is supposed to remember the
key, the key is probably weak and could be brute-forced, and if a computer
remembers the key, the computer could be compromised and the key stolen.

To me, that's the biggest issue.

------
lucideer
I would echo a lot of the comments here criticising the article title - it is
(by its own admission) referring primarily to the environment in which
Javascript is usually run (browsers), with very little about the language
itself.

So the use of the word "Javascript" in the title is very deliberately
misleading.

As for the rest of the title,
[http://meyerweb.com/eric/comment/chech.html](http://meyerweb.com/eric/comment/chech.html)

Title aside, the article is informative content-wise, but I'd personally be
much more interested in an actual honest review of cryptography in Javascript
the language - one that, for example, didn't choose to exclude NodeJS (or
similar server-side tech). Can JS on the server do cryptography right? If not,
why not?

------
mazsa
This topic deserves a separate entry but cf.
[https://news.ycombinator.com/item?id=7900597](https://news.ycombinator.com/item?id=7900597)

------
tlrobinson
Ok, now what? There is _clearly_ strong demand for cryptography in client side
web applications. Telling people "don't do that" isn't good enough, they're
going to do it anyway. How can we make it better?

One idea: a way to bundle, hash, and optionally sign a set of HTML/CSS/JS
resources (not unlike a Chrome extension). If the bundle is updated the user
can be prompted. If the user desires, they can check to see if trusted
individuals or groups have already reviewed the code, or review it themselves.
Perhaps the code is hosted on Github (or wherever) and people can comment on
questionable changes there.

~~~
wglb
A good point to start is this comment:
[https://news.ycombinator.com/item?id=1951556](https://news.ycombinator.com/item?id=1951556)

~~~
tlrobinson
These seem like pretty weak arguments to me, especially if you were able to
restrict execution to resources in the bundle (maybe that means no "eval" --
fine).

Most languages permit similar amounts of runtime dynamism, they just don't
make it as easy as JavaScript. Actually, in some ways JavaScript is _better_
than other languages in this respect:

* Using closures to encapsulate "private" variables is pretty bulletproof, AFAIK.

* ES5 features like "Object.freeze" and "strict mode"

* Object-capability safe subsets of the language, e.g. Caja, SES, etc.

* CommonJS-style module system
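For the first two bullets, a minimal sketch of what they buy you (the "signing" function is a toy placeholder, not real crypto):

```javascript
'use strict';

// Closure-encapsulated secret: nothing outside makeSigner can read `key`.
function makeSigner(key) {
  return {
    sign(msg) { return msg.length ^ key; }, // toy placeholder, not a real MAC
  };
}

// Object.freeze stops later scripts from swapping the method out, and in
// strict mode the attempt throws instead of failing silently:
const signer = Object.freeze(makeSigner(42));

let tampered = true;
try {
  signer.sign = () => 'evil'; // TypeError on a frozen object in strict mode
} catch (e) {
  tampered = false;
}
```

This guards against later same-context scripts replacing the method; it does not help if the attacker's script runs first, which is the article's core objection.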

------
mynames
What about JS Cryptography _over_ SSL? Another security layer to protect
against those who can break SSL, or is it just useless?

~~~
jerf
Useless. If the SSL is broken, the attacker can see through to all the
messages the client is being sent, and decrypt whatever the client can
decrypt.

Here's the thing about why JS-in-webpage crypto is fundamentally useless. The
purpose of cryptography is to, somehow, secure some sort of communication
between two parties. Maybe it's as tiny as a zero-knowledge proof, maybe it's
a full-on encrypted general-purpose channel like SSL/TLS provides that can
send arbitrary data back and forth in real-time, maybe we're trying to send
small proofs of a message's integrity via hashes or signatures, maybe we're
sending messages from our past self to our future self and want it safe in the
meantime "at rest", but crypto is always about some sort of communication
between parties A and B (the traditional Alice and Bob, even in that last case
Bob is just an older Alice wearing a mustache).

Web pages run in a sandbox in which they fundamentally become part of the
server's environment briefly. They're just an extension of the server, and are
essentially designed from top to bottom to ensure that the only thing a web
page can access is what was sent by the same server. The only cookies a server
can get are the ones it set. (Modulo some cross-internal-domain stuff, but it
doesn't change my point here.) The only requests it can manipulate are the
ones it sent. The only resources the web page will access are ones the server
tells it to. (Yes, a server can direct you to resources on other servers, and
that's actually a big deal, a big delegation of trust, and one of the
trickiest corners of browser security.) The web browser forbids a page from
the server from accessing the hard drive on its own, or accessing any other
site's stuff, etc. Structurally, the web page is just an adjunct to the
server, by design, and anywhere this property fails is considered a security
hole and fixed as quickly as possible. Without any ability to access any local
resources that were not themselves originated from the server (i.e., HTML
local storage doesn't get you out of this, the server fully controls it), the
web page has no independent identity to assert. It is totally in thrall to the
server.

Note how I kept saying "web page", and not "web browser". There's a big
difference; web _browsers_ are allowed to do a lot of things a web _page_ is
not. Pages have a distinct execution context and security policy different
from the browser.

In the crypto sense, there's no communication between a "web server" and the
"web page"; it's all just one system. The web _browser_ may use SSL/TLS to
communicate the necessary information to form a web page, but once that is
done, the web _page_ is deliberately put into a context where it is just an
adjunct to a server, and there's no distinct two parties with which to
communicate anymore, from a crypto sense. This may seem counterintuitive,
because we see messages flowing back and forth, but that's all internal
chatter, the _browser_ functioning as an internal bus for the _page_ /server
unified security context. We have turned the full power of our cryptography
and security research into making web _browsers_ that enable web _pages_ to
function as extensions of the web _server_. It was and is not easy.

Further, what defines "server" is not the human word or concept, which can be
distracting. What defines the "server" is _whatever finally produced the bytes
that were used to create the web page_. If your _browser_ isn't using TLS/SSL,
that turns out to be pretty much "just whoever felt like serving some bytes to
you". (If you don't think that "intercepting web requests" is practical, it
is. It's off-the-shelf tech for hackers. Do not assume it is hard.) On the
other hand, if SSL is properly used (and let's skip over what that means and
the validity of cert authorities, etc), then you do have assurance that the
bytes came from your server without anybody in between, and the web _browser_
is providing assurance to the web _page_ that it is on an uninterrupted
channel.

When not using _browser_ crypto, it just doesn't matter how you spin around;
the attacker owns the web _page_ , and you can't do anything about it. And
when I say "own", I mean it, fully, the web page is actually functioning as an
adjunct to _their_ server, and you're stuck with the results. It doesn't
matter what crypto you think you're pushing to the user's browser, because
what's actually happening is that the attacker is pushing _their_ crypto to
the user, and its relationship to what you intended to push is entirely and
solely up to their good graces, and by definition we're pretty much talking
about people without good graces. Without SSL, you have _ZERO_ control over
the user's webpage, and the attacker has all of it. Unsurprisingly, there's no
crypto system that can survive that restriction.

When using _browser_ crypto, SSL/TLS is providing you the maximum degree of
assurance that is possible already that the channel is secure, more or less to
the maximum extent the network will permit (i.e. attackers can observe byte
flows or who you connect to and there's not much that can be done about that).
The argument that in- _page_ crypto is useless amounts to the observation that
this binary situation admits of no "threading the needle", especially in light
of the fact that without SSL being used we pretty much get to assume that an
attacker can do absolutely _anything_ to the data between the user and your
server, and it's pretty hard to construct a crypto system that can stand up to
the attacker _arbitrarily manipulating it_ on the way to user, which is what
many people here are trying to do.

(Incidentally, clearly understanding the difference between the web _page_ and
the web _browser_ is also important for understanding why this is particularly
a problem for web pages and not so much other things. It's because web pages
go through so much effort to run in a sandbox such that the server doesn't get
any additional permissions it shouldn't have via sending malicious web pages.)

(There's a better blog post in here struggling to get out; this is my first
attempt to put this in words. The server probably needs a page/server
distinction too, for instance; the web page isn't running with the full
"server" privileges, of course, it's actually also running as a "page" sort of
thing too, where the browser and the server are collaborating to create a
single unified security context hosted within the two of them.)

~~~
mynames
Even if it's some kind of Diffie-Hellman JS crypto?

------
diafygi
I submitted a Show HN yesterday with what I believe is a potentially
legitimate use case for javascript crypto (or more specifically, in-browser
client-side crypto):

Bring Your Own Filesystem
([https://github.com/diafygi/byoFS](https://github.com/diafygi/byoFS))

Example chat demo:
[https://diafygi.github.io/byoFS/examples/chat/](https://diafygi.github.io/byoFS/examples/chat/)

It seems like an unhosted-style app (unhosted.org) can mitigate all of the
OP's concerns.

> Secure delivery of Javascript to browsers is a chicken-egg problem.

I address some of this in the byoFS README[1] and again in an /r/crypto
discussion[2].

Since the webapp is unhosted, the webapp is built to work when being served
from anywhere, including your local filesystem. This means that you could
download the webapp anonymously, inspect/audit/checksum it, then run it from
your local filesystem in the browser (try it! just right click and Save
As...). Alternatively, you could load it from a server that you would trust to
kill itself rather than comply with a secret court-ordered compromise (e.g.
Lavabit, Internet Archive, etc.).

Additionally, since the webapp is just static files, all the webapp server
sees is anonymous requests (over https to prevent MITM). It doesn't know who
is requesting the static files, so it would be difficult to perform a targeted
malicious injection. You would have to broadcast the injection, which is
generally ill-advised since it might be spotted by a vigilant third party.
Most of the surveillance injection attacks that have been leaked have been
targeted, so this basically cuts off that attack vector.

So, in order to compromise this webapp through injection, you'd have to hack
into the trusted static server and blindly serve the injection to everyone who
requests the webapp (hopefully including your target). This is basically the
same attack vector you'd have to do if you were trying to inject something
into a download-and-install local application.

I don't think javascript crypto is universally a good idea, but for this
unhosted use case, it can limit attack vectors to the same as download-and-
install local applications.

> Browser Javascript is hostile to cryptography.

True, which is why WebCryptoAPI should be prioritized, and I can stop using
SJCL in byoFS. Once APIs for crypto primitives are baked into browsers, this
argument disappears.

> The "view-source" transparency of Javascript is illusory.

It's certainly much better than inspecting desktop apps. Also, if your webapp
is unhosted, you can certainly publish a signed hash of the static files,
which can be verified after you download the app and before you run it.

One infrastructure improvement that might be very helpful would be to be able
to buy an SSL certificate from a CA that is limited to a particular file hash,
which the browser checks before showing the connection as "validated" (maybe a
checkmark beside the https lock?).

[1] - [https://github.com/diafygi/byoFS#security-and-
philosophy](https://github.com/diafygi/byoFS#security-and-philosophy)

[2] -
[https://pay.reddit.com/r/crypto/comments/289w7x/bring_your_o...](https://pay.reddit.com/r/crypto/comments/289w7x/bring_your_own_chat_javascript_secure_chat_using/ci9cx7q?context=4)

------
id
same submission from 3 years ago:
[https://news.ycombinator.com/item?id=2935220](https://news.ycombinator.com/item?id=2935220)

------
tarekmoz
I've stopped reading at: "You could use SSL/TLS to solve this problem, but
that's expensive and complicated."

------
mantraxB
This article is all over the place.

First of all, calling it "JavaScript cryptography" only to immediately correct
itself: "oh, we mean browser client-side scripted cryptography, not JavaScript
in general, say Node.js".

Then calling it "cryptography", when all the article talks about is hashing
passwords. There's more to cryptography than hashing passwords, and not all of
it is susceptible to the attacks described.

So the real title of this article is "Client-side scripted password hashing in
browsers less than ideal". I know, it doesn't roll off the tongue as sweetly as
"JavaScript eats babies", but what are you gonna do.

Third, the faulty logic of "this new technique has a bad edge case, so we
should completely reject it, despite any benefits".

The article just glosses over the situation where the script is served
securely and yet you hash at the client. "We already have a secure connection,
anyway" the author claims. Sure, but the server still gets the plain text
password, because TLS doesn't hash, it encrypts (and you can decrypt). If the
server is passing the login request to a third party to check the hash, turns
out it's still a good thing to hash at the client so the intermediating party
can't abuse its role, without the horrid UX of OAuth. Has the author thought
of that? Nope, just dismissed the potential outright.

Plain old-school TLS/SSL connections have exploitable edge-cases as well.
Should we "consider them harmful"?

Intelligent conversations about security require nuanced opinions that go into
when something is useful, and when it isn't, and let us make the call if it's
_harmful_ or _useful_ for a given project.

Calling something "harmful" outright and only listing the cons without the
pros isn't this kind of intelligent conversation. It's just counterproductive
scaremongering.

~~~
tptacek
This article has virtually nothing to do with password hashing, but the fact
that readers routinely take that away from it is part of why I don't like it
much either. We didn't promote it, or post it on HN; I think I posted it to
Twitter once, and now people re-find it once a year.

I'll absolutely win an argument with you (or, I think, anyone else) about
browser Javascript crypto. It's simply a bad idea. I just don't think this
particular article will.

We don't reject browser Javascript crypto "despite any benefits". We reject it
because those benefits are illusory. It's clear that browser crypto makes
people _feel better_, but TSA airport security also makes the majority of
Americans _feel better_ too.

~~~
peterbraden
What about distributing it as signed browser extensions?

~~~
danielweber
That's what Google is working on, and it's a much better environment. The code
lives on your computer all the time, and you (in theory, assuming the proper
browser settings) can make sure you are using a constant version of the code
that matches up with what other people are using and auditing.

You can't lock down JavaScript at all. Your browser should (in theory) tell
you when a plug-in is asking to be updated and give you the option to say
"nope."

In theory, you could even walk up to a brand-new (assuming uncompromised)
computer and reinstall the plugin. But you would still need some way of
knowing that you were installing the same version you decided to trust
earlier. Recognizing checksum pictures, I guess?

------
Already__Taken
Long story short, this topic won't end until XSS gets fixed. The people
writing HTML don't consider XSS their problem; it's been, what, 20 years?

Nobody even knows what 'fixing' XSS would look like.

~~~
jonrimmer
Um, the "people writing HTML" have done a lot of work to prevent XSS, by
introducing Content-Security-Policy and other HTTP headers:
[http://ibuildings.nl/blog/2013/03/4-http-security-headers-
yo...](http://ibuildings.nl/blog/2013/03/4-http-security-headers-you-should-
always-be-using)

The main problem is backwards compatibility, as older browsers don't support
them, but the idea that people have their head in the sand re. XSS is complete
nonsense.

