
Stanford Javascript Crypto Library - austengary
http://crypto.stanford.edu/sjcl/
======
meshko
Obligatory quote from [http://www.matasano.com/articles/javascript-
cryptography/](http://www.matasano.com/articles/javascript-cryptography/)

WHAT ABOUT THINGS LIKE SJCL, THE STANFORD CRYPTO LIBRARY? SJCL is great work,
but you can't use it securely in a browser for all the reasons we've given in
this document.

SJCL is also practically the only example of a trustworthy crypto library
written in Javascript, and it's extremely young.

The authors of SJCL themselves say, "Unfortunately, this is not as great as in
desktop applications because it is not feasible to completely protect against
code injection, malicious servers and side-channel attacks." That last example
is a killer: what they're really saying is, "we don't know enough about
Javascript runtimes to know whether we can securely host cryptography on
them". Again, that's painful-but-tolerable in a server-side application, where
you can always call out to native code as a workaround. It's death to a
browser.

~~~
gcommer
You may be interested in the defensive js
([http://www.defensivejs.com/](http://www.defensivejs.com/)) project which
seeks to securely isolate JavaScript code from malicious JavaScript being
injected on the page. It also provides a verified crypto library
implementation.

Combine this with HTTPS and I think doing the crypto on the client is
certainly feasible (and has been implemented by mega (well, they messed up a
few things, but that was an issue in their use of crypto, not the crypto
implementation) and 0bin).

EDIT: Also, the site you are quoting is explicitly complaining about people
using JavaScript crypto NOT served over HTTPS, and it claims that there is no
advantage to using JavaScript crypto when you are using TLS anyway. This is
wrong, because using both means that the web service we are using (for
example) never knows our plaintext password, so they can't attack us under the
assumption of password reuse, like in
[http://xkcd.com/792/](http://xkcd.com/792/)

~~~
anon1385
>This is wrong, because using both means that the web service we are using
(for example) never knows our plaintext password, so they can't attack us
under the assumption of password reuse, like in
[http://xkcd.com/792/](http://xkcd.com/792/)

TLS doesn't protect you against a malicious site that is collecting passwords.
Even if you were to examine the javascript code to verify that it isn't
sending the plaintext[1], they could send a different chunk of code any time
you access the site in the future. Either because the site is malicious -- as
in the above example -- or because it has been compromised (whether that be by
skiddies or three letter agencies with legal papers).

The only 'benefit' javascript crypto gives you is that it makes it easier for
people to develop apps where the user's data is encrypted before it is sent to
the server (such that the server can never decrypt it). However, doing this in
a javascript web app totally negates that, since the server can just send a
compromised chunk of js any time it feels like. So the additional security to
the user is basically zero.

If you want to seriously create a service like this don't use javascript
inside the browser. Do what Tarsnap does: provide an open source native client
that does _not_ automatically update.

[1] and let's not pretend that modern javascript is at all readable, in the
age of minification and asm.js

~~~
3JPLW
You must always trust _something_. In the case of a SpiderOak-like service,
you must trust one of their client implementations to use it. Whether it is
their compiled binary (which isn't even open source[0] "yet") or a javascript
client in the browser delivered securely. Even in the case of tarsnap, with
open source implementations that don't self-update, you must trust your own
code review or somebody else's -- not a trivial task.

[0].
[https://spideroak.com/faq/questions/35/why_isnt_spideroak_op...](https://spideroak.com/faq/questions/35/why_isnt_spideroak_open_source_yet_when_will_it_be/)

~~~
gcommer
The point anon1385 is trying to make about JavaScript clients is that they can
be changed at any time by the web service provider, whereas open source
clients can be 'verified' once by the open source community and then can't be
easily changed by the service provider (unless they have an auto-updater,
which Tarsnap does not).

~~~
3JPLW
Ah, very good point. And the provider can send uniquely compromised versions
to individuals to reduce their chance of detection, as well.

~~~
marshray
Hushmail.

~~~
sneak
It's almost like a punchline these days, innit?

~~~
marshray
Converting a weak encryption scenario into a Hushmail scenario seems, in a
perverse way, an indication of progress.

------
jonpaul
Anyone interested in signed JavaScript?

Initiatives like this are great. However, I'm most interested in signed
JavaScript. I'm surprised that there isn't more of a discussion going about
this since JavaScript crypto is near worthless if it's served from an
untrusted server.

For example, let's say that you have an application that uses client-side
crypto in JavaScript. Now assume that the server (which serves up the
client-side app) is hacked and the client-side application is modified to send
your private keys back to the hacked server: there is currently no way you'd
know, as the consumer of that client-side application. If signed JavaScript
existed, then browsers could alert you that the JavaScript you're running has
been modified and doesn't match its signature, and refuse to execute it.

~~~
tghw
It occurred to me the other day that it should be pretty simple to write a
script that gets loaded first on the page and removes all subsequent scripts,
then loads them itself, checking the MD5/SHA1 of the script against a known
good value, stored in that script's attributes.

    
    
        <script src="scriptloader.js"></script>
        <script src="jquery.min.js" data-md5="a1b2..."></script>

Then you could decide to not load scripts that do not match the correct hash.
It could even ping the server to alert it to broken scripts.

~~~
sdevlin
How would this protect you against a malicious server?

~~~
tghw
If the server serving the HTML is pwnd, then it doesn't, but it doesn't really
matter then, either.

This protects against any external scripts being unexpectedly modified, e.g.
someone MITMing your jQuery source.

~~~
sneak
If you don't allow external scripts to be modified, why host them externally
at all? Why not just wget them and host them locally alongside the checksum
document and skip all this silliness?

Oh, also, those scripts can themselves load in other scripts you haven't
checksummed.

This is madness you're suggesting.

~~~
tghw
In what scenario do you want external scripts to be modified? Why not take
advantage of their ability to serve the scripts while also verifying that they
are the same scripts you expected to have? You can also verify that those
scripts do not load any other scripts in the version you have. Then, if it's
changed later to load more scripts, you'll know about it.

How is checking the validity of the scripts that run on your site madness??

~~~
rictic
It's... kinda madness. Just to be clear that we're talking about the same
thing, here's the proposed process as I understand it:

1) load your loader script, which has the URLs, fingerprints of the scripts
you want to run, and the necessary dependency information (jquery-ui must load
after jquery for example). In the best case, this file is being served out by
the same server that's hosting the HTML, that way at least you're not adding
_more_ attack vectors.

2) from the loader script, initiate ajax requests for each of the remote files
you need

3) as you get each one back, validate that its signature matches the expected
one, raising an exception if it does not (ideally also displaying something to
the user), and evaluating it once its signature matches and all of its
dependencies have loaded.

So, why is this madness?

1) Most of the time the reason that you're letting a third party host these
files is for speed. They've got a CDN, and hopefully the file will already be
cached by your user. Grabbing resources with javascript that you could load
directly in the html will slow down your page's loading time, as the browser's
html parser isn't able to look ahead and fetch resources that are likely to be
needed before the renderer has asked. (HTML has a defined rendering order that
can be kinda strict sometimes; this is the same reason you don't put your
<script> elements in the <head>.)

2) Another reason for using a CDN for your JS libraries is convenience, which
this process also wipes out.

3) The whole thing won't work at all unless the third party server sends back
cooperative CORS headers, as you can't do an ajax request to a third party
site without their cooperation.

Finally though – and this is the big one – it's more convenient for the
developers, strictly safer, and faster for the end user if you just compile
all of the JS and serve from the same domain that's serving your HTML. As
stated above, if that server is compromised, you're toast anyways (barring a
browser extension or similar). If you really want some more security, look
into SSL (and actually look into it, there's definitely much better and much
worse ways of doing it).

~~~
tghw
Most of what you're describing as "madness" is already done in head.js and
require. They have no particular speed penalties and handle dependencies
better than just putting script tags in the right order. The one difference is
that a system like this would check the hashes to verify the code.

The one possible catch, as you mention, would be getting access to these
scripts before they are loaded without having cross-origin problems.

There are a number of problems with serving from your own domain. It is, in
fact, much less convenient for developers, as it adds an extra step to the
build process and requires the system to properly handle caching so that old
resources are not still served after a build. It is also slower to serve from
the same domain as there are connection limits. Lastly, it gives up all
advantages of a CDN.

My proposal is an attempt to continue taking advantage of CDNs and third party
resources, but without giving them the keys to your site. Did you ever
consider that Google has access to all of your users' cookies, if they wanted
to add a small modification to jQuery or Analytics? Considering recent
revelations about government involvement, is it really out of the question
that they would take that information?

~~~
rictic
head.js and require do have significant speed penalties unless you're just
using them for tracking dependencies and developing locally. It may be the
right tradeoff of effort vs. performance for some projects to leave this on in
production, but there's nothing to gain in denying the huge performance boost
you're leaving on the table by not compiling your JS.

I'll try to extract the core of my argument. The hashing proposal is madness
for two reasons: 1) it's slower and less secure than just serving all of your
js in one file; 2) it will not actually work without the cooperation of a CDN.

1) The proposal requires you to have one trusted server that you're serving
javascript resources out of (because you need to load the script loader and
fingerprints from there). If you want fast and secure, you've already paid the
cost of a round trip to server #1, and the risk of trusting server #1. The
sane thing to do from a performance and security standpoint is to load all of
the javascript that you can in that request. Otherwise you're going to be
blocking on that request returning, then the renderer reaching that script's
location in the html, then that script being executed before it fires off the
requests.

2) I'll phrase this as a challenge. Try to load jquery from a CDN with an ajax
request. Remember, the key is to get the source of the script into memory
without executing it, so that you can hash and validate it first. Feel free to
try it right now in your developer console, I'll even give you a code snippet
to start from:

    
    
      url = '//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js'
      var request = new XMLHttpRequest();
      request.open('GET', url);
      request.send();

------
marcopolo
If the maintainer is reading this, I submitted a pull request that
significantly speeds up CCM encryption by using arraybuffers. I'd love to see
it merged in:
[https://github.com/bitwiseshiftleft/sjcl/pull/89](https://github.com/bitwiseshiftleft/sjcl/pull/89)

------
nly
The W3C have a draft API for hosted JavaScript cryptography:

[http://www.w3.org/TR/WebCryptoAPI/](http://www.w3.org/TR/WebCryptoAPI/)

Netflix, of all people, have implemented a flavour of this at least once for
Chrome:

[https://github.com/Netflix/NfWebCrypto](https://github.com/Netflix/NfWebCrypto)

------
j_m_b
With the recent revelations that private internet companies (ISPs) are
colluding with the NSA, I very much doubt the security of certificates issued
by a "certificate authority". I really like the idea of Secure Remote Password
(SRP), which uses a Diffie–Hellman-like key exchange instead of relying on
third-party certificates. The main difficulties I see to SRP adoption are:

1) Not all browsers natively support SRP, though this is changing. You
therefore need a good JavaScript library for interfacing with the server. This
of course leads to

2) No trustworthy JavaScript crypto library exists, with the possible
exception of the incomplete SJCL. The biggest problem current JS libraries
have is that of

3) generating random numbers. Because there is no cryptographically secure
rand() JavaScript implementation, the solution I've seen is to use mouse
movement or other user input to generate random numbers. The problem with this
is that it takes ~30 seconds of random movement from the user to "seed" the
generator!

One interesting method I've thought about is to use
[http://www.fourmilab.ch/hotbits/](http://www.fourmilab.ch/hotbits/) to
retrieve random numbers, but this just leads back to depending on a third
party for secure communications. I think an efficient, cryptographically
secure pseudorandom number generator is the biggest deficit of JS-based
crypto tools.
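For what it's worth, the draft WebCrypto API mentioned elsewhere in this thread includes getRandomValues, which (in the browsers that implement it) fills a typed array from a CSPRNG with no mouse-movement seeding. A sketch, with a Node fallback added here purely so the snippet runs outside a browser:

```javascript
// Where window.crypto.getRandomValues is implemented, it fills a typed
// array with cryptographically strong random bytes; no manual seeding.
function randomBytes(n) {
  const cryptoObj = (typeof window !== 'undefined' && window.crypto)
    ? window.crypto                  // browser
    : require('crypto').webcrypto;   // Node (15+) fallback for this sketch
  const buf = new Uint8Array(n);
  cryptoObj.getRandomValues(buf);
  return buf;
}
```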

~~~
tptacek
I'm not sure how to engage with the idea that SRP is a viable replacement for
certificate authentication; it only works when the client and server have a
pre-shared key.

I very much do not trust certificate authorities, but observe that you don't
_have_ to trust certificate authorities to make the security architecture of
TLS work. Already, CA compromises have a minimized impact on properties like
Google Mail, whose certificates are pinned in Chrome and Firefox. Soon, all
properties will get the same privilege, when we adopt schemes like TACK that
allow dynamic certificate pinning.

As soon as a critical mass of browsers support dynamic pinning, it will become
drastically less profitable to target CAs, because attempts to present forged
certificates to Internet users en masse will quickly be detected.

~~~
j_m_b
> it only works when the client and server have a pre-shared key

What "pre-shared key" are you referring to in SRP? The only a priori value
needed for SRP is the safe prime (N) and generator (g).

~~~
tptacek
The password.

------
nsmartt
This could lead to a rise in services offering client-side encryption. A
'better' homepage might help.

~~~
lisper
Anyone who needs a "better" home page probably ought not to be using it.

~~~
nsmartt
Developers with a budding interest in cryptography should certainly be using
this, even if they shouldn't be asking users to trust their work. Your
mentality excludes a rather large set of users who may not realize what
they've found.

Also, a homepage with a pitch, a visually appealing design, etc would likely
generate some buzz from people who aren't interested in using it but are
interested in the concept. This could very well lead to discovery by
developers who would otherwise have never found it.

------
methehack
What do you guys think about a service (think pingdom) that you set up to
periodically request a js file from your server and check it against a known
good checksum?

EDIT: to spell this out, you would self-host the Stanford library, for
example, and have this service verify it against a known good checksum.

~~~
tptacek
You don't just need to authenticate sjcl.js (or whatever). You need to
authenticate every page element that can influence the JS, because JS is
malleable. The service you propose won't work.

~~~
methehack
Well, couldn't you checksum the whole page?

I know that creates a big pain in the ass in terms of modifying the page and
in terms of making the page dynamic, but bracketing those two concerns -- why
wouldn't that work?

~~~
sneak
You can checksum the whole page, but any externally loaded JS can monkeypatch
any other part. Use analytics? How about a payment widget? All of these can
affect every part of the js environment, overwriting anything from jQuery to
sjcl. Alternately, they could leave the crypto alone and just hook into
keystroke handlers or the DOM and steal your plaintext that way.

Also, some browsers will run JS from urls referenced in img tags as long as
they are served with a text/javascript MIME type.

It's far too big an attack surface.
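To make the monkeypatching point concrete: any script that runs later on the page can transparently wrap a "trusted" function in place. A sketch using a stand-in object (not the real sjcl API):

```javascript
// Stand-in for a crypto object exposed on the page; not the real sjcl.
const fakeSjcl = {
  encrypt: (key, plaintext) => 'ciphertext-of-' + plaintext,
};

const leaked = [];

// A later-loaded third-party script wraps the function. Callers see
// identical behavior while the plaintext is siphoned off first.
const realEncrypt = fakeSjcl.encrypt;
fakeSjcl.encrypt = function (key, plaintext) {
  leaked.push(plaintext);            // exfiltrate before encrypting
  return realEncrypt(key, plaintext);
};
```

In a real attack `leaked` would be shipped off to a remote server; nothing about the page's own code had to change for this to happen.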

------
franze
I built Masel with it (as a demo for our local JavaScript meetup):
[http://replycam.com/m/](http://replycam.com/m/) It's pretty much serverless
encrypted message sharing. It's MIT-licensed and unfinished:
[https://github.com/franzenzenhofer/masel](https://github.com/franzenzenhofer/masel)

~~~
RazerM
Off-topic, but 'masel' is the Scots word for 'myself'.

~~~
franze
I named it after the Yiddish
[http://en.wikipedia.org/wiki/Mazel_tov](http://en.wikipedia.org/wiki/Mazel_tov)
(written in German as "masel tov"). "Masel" stands for "a drop from above", so
basically "masel" as a "drop" of privacy. But, well, I coded it during a train
ride while drinking some beer (reached the Ballmer Peak with it:
[http://xkcd.com/323/](http://xkcd.com/323/) )

------
yawgmoth
This is a great library; I used it to build a Diffie-Hellman key exchange and
symmetric encryption in an ASP.NET MVC4 app. The result is horribly insecure,
but it solved a situation where our product is installed under HTTP rather
than HTTPS.

If anyone is curious, it works just fine with BouncyCastle and
Rfc2898DeriveBytes (for PBKDF2).

------
brador
Universities should do more open source projects. How about a university that
funds open source works exclusively?

~~~
dubcanada
I'm sorry but I have to ask, is this sarcasm? I can't really tell.

~~~
ape4
In case of a lack of sarcasm, check out
[http://en.wikipedia.org/wiki/Berkeley_Software_Distribution](http://en.wikipedia.org/wiki/Berkeley_Software_Distribution)
(the core of iOS).

------
goldfeld
Relatedly, I'm looking for JavaScript checksum implementations, especially
sum24 or any other good 24-bit hashing algorithm, and I couldn't find anything.
Does anyone know of canonical hashing implementations for CRC and checksums
(not crypto-level; I need short, short hashes) for JS?

~~~
methehack
I was thinking about this too -- but if they compromised your encryption JS
couldn't they also compromise your checksum JS?

------
leke
I was playing with a JavaScript crypto library and Greasemonkey. Here's a
quick and early Facebook demo:
[https://www.youtube.com/watch?v=3HlQJWXlknE](https://www.youtube.com/watch?v=3HlQJWXlknE)

------
devx
Great. Now can we see more services implement easy to use client-side
encryption before uploading data to their servers?

~~~
tptacek
Great. Now we can pretend we're secure instead of grappling with the problem
that we're not. :)

Observe that SJCL's own authors warn about this problem.

------
sneak
Let's all congratulate austengary on successfully trolling HN's star
quarterback.

------
justapor
It's JavaScript, not Javascript.

