

Challenge HN: Break my in-browser crypto for $1000 - absherwin

After reading arguments about using cryptography in the browser, I wanted to see how easy it was to fail in practice, so I built a simple app: notecrypt.appspot.com. It encrypts notes using SJCL, stores their encrypted form on the server, and enables retrieval with the same credentials.

From a security perspective, this approach is uniformly as good as or better than a browser app not using in-browser crypto. As daeken noted in a discussion a few days ago, it provides protection against a demand to produce the notes. He also claims that "For all other cases, there are a million simple means by which you can break it." So I challenge anyone on HN: show me such simple means.

The instructions for claiming a reward are contained in a note stored in the app's database. To help you along, I'll navigate to any URL you send me while logged into the site. If you'd like to try your hand at finding a vulnerability in the AES implementation used, I'll send you a snapshot of the database. If you want to try injecting code, I'm happy to use a network you control if we can arrange the logistics, though I won't promise to log in if you can't provide a valid SSL certificate :). If you need assistance to demonstrate a reasonable attack, ask. Post your attacks below or email my user name @gmail.com. The first successful attack wins. Usual disclaimers apply.

You might argue that this doesn't prove anything about JavaScript cryptography since additional security is used. That's true. If this survives, it doesn't provide evidence that the in-browser cryptography is sufficient; it only suggests that it can defeat some additional threats against an otherwise secure application.
======
moxie
You posted the link like this:

"...I wanted to see how easy it was to fail in practice so I built a simple
app: notecrypt.appspot.com"

If you copy and paste that into your browser, your browser will make an HTTP
request. Your webapp will send back a 302 redirect with an HTTPS link, but if
I'm an attacker running sslstrip
(<http://www.thoughtcrime.org/software/sslstrip>), that won't work.

What's more, you don't set HSTS headers:

    $ curl -i https://notecrypt.appspot.com/
    HTTP/1.1 200 OK
    ETag: "ibqapA"
    Date: Wed, 29 May 2013 18:59:37 GMT
    Expires: Wed, 29 May 2013 19:09:37 GMT
    Cache-Control: public, max-age=600
    Content-Type: text/html
    Server: Google Frontend
    Transfer-Encoding: chunked

...so anyone that types "notecrypt.appspot.com" into their browser in the
future will continually expose themselves to the same vulnerability.
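For reference, HSTS is a single response header. Here is a minimal sketch of what the missing header looks like, written as an illustrative Python WSGI handler (not the app's actual code, which runs on App Engine):

```python
# Illustrative WSGI app showing the Strict-Transport-Security header
# that is absent from the curl output above. Not notecrypt's real code.
def app(environ, start_response):
    headers = [
        ("Content-Type", "text/html"),
        # Instruct the browser to use HTTPS for this host for one year,
        # so a later plain-http request never leaves the machine.
        ("Strict-Transport-Security", "max-age=31536000; includeSubDomains"),
    ]
    start_response("200 OK", headers)
    return [b"<html>ok</html>"]
```

Note that HSTS only takes effect after the first successful HTTPS visit, which is why browsers also ship preload lists.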

At that point I can just modify the JS that gets transmitted back to the
browser such that it doesn't actually do any encryption.

As always, webapp-based JS cryptography is reducible to the strength of SSL. If
I can break SSL, I can break your JS crypto, so there's really no point in
doing the JS-based crypto.

How do I collect the $1000?

~~~
absherwin
The instructions are in a note.

If I understand your attack, you'd need to set up a clone website that resends
the password in the clear, have a malicious network redirect me to it and get
me to enter my password. I'm happy to enable you to demonstrate that it
works. The hardest part is presenting a valid SSL certificate. As I warned in
the initial post, "I won't promise to log in if you can't provide a valid SSL
certificate."

Email me and we can arrange the details.

~~~
moxie
No need to set up a clone website or to obtain a valid SSL certificate.

If I'm in a position to observe a notecrypt user's communication (the reason
encryption is necessary), I can just run sslstrip (no SSL certificate
required).

It will transparently intercept all of the notecrypt user's plaintext
communication to notecrypt without generating any browser warnings.

At that point I also have the opportunity to modify any of the plaintext
traffic as it passes by.

The user will be communicating with your actual website (no need to set up a
clone), but I can just modify the JS as it is transmitted to the user so that
it doesn't actually encrypt anything.

This attack just requires running a single command, no complex setup required.
You might want to read more about sslstrip to understand how this works.

~~~
rarrrrrr
FWIW, my reading of the OP is that it's stipulated that the user will check
for the presence of a valid SSL session (i.e. with in browser visual
indicators) before login.

BTW, have been following your work on TACK with great interest.

~~~
j0ev
That's a bad assumption. I could feasibly purchase the appsp0t.com domain,
grab an SSL cert for notecrypt.appsp0t.com, hop onto the LAN and run sslstrip,
redirect the user to <https://notecrypt.appsp0t.com> and voilà: green address
bar with a close-looking address. That would probably fool me.

The "green" indicator is nice but definitely should not be _relied upon_ to
protect the user.

note: I used appsp0t as an example; no idea if it's really available to be
bought.

Edit: it's not letting me reply to the below comment (probably because this is
a new account), but afair most browsers have fixed the IDN problem by checking
for "suspicious" characters (characters that look similar to roman glyphs) and
forcing the URL to be rendered in the full punycode URL.
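For what it's worth, the punycode rendering described above can be seen with Python's built-in idna codec (an illustration only; the spoofed hostname is hypothetical, and browsers apply their own, stricter display rules):

```python
# A spoofed hostname whose 'o' is U+043E CYRILLIC SMALL LETTER O,
# visually near-identical to Latin 'o'. On the wire (and in a browser
# that detects the mixed scripts) the label appears as punycode instead.
spoofed = "n\u043etecrypt.appspot.com"     # looks like "notecrypt.appspot.com"
wire_form = spoofed.encode("idna")
assert wire_form.startswith(b"xn--")       # the label is not what it looks like
assert spoofed != "notecrypt.appspot.com"  # despite appearances
```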

~~~
ryalfalpha
I think it's also worth pointing out in Moxie's sslstrip talk he does go into
detail on using IDN
<http://en.wikipedia.org/wiki/Internationalized_domain_name> to spoof
something similar (for non-english TLDs).

Not sure how valid that still is (as the talk is a couple of years old? I only
watched it today), but it has to be assumed that a portion of users are going
to fall for even a badly mimicked url.

Gotta say the IDN stuff is impressive in how generalised it could be.
Terrifying. I'm convinced he's owed the $1,000.

~~~
Dylan16807
And what is the method of bypassing the similar-glyph detection and the very
strict language whitelists for different TLDs?

~~~
ryalfalpha
I said 'Not sure how valid that still is' for a reason :). I just found it
extremely interesting and 'out of the box'. After reading up a bit on modern
defences, I think you're right that it's irrelevant nowadays. (Unless you're
in legacy hell, doubtful for the target user demographic)

But the parent comment is still valid: there's nothing to stop Moxie
registering another domain, even note-crypt.com or notecrypt.org or anything
like that, and the average user will be complacent with that. (The same
applies for appspot: note-crypt.appspot.com vs notecrypt.appspot.com vs
notecrypto.appspot.com.)

It only takes a single lapse in checking the domain and they've lost their
login details and encryption key to the attacker.

Point is, the JS crypto does not add anything to the situation. All the
security is provided by SSL, and once it goes, the JS doesn't help. It just
gives the users a fake sense of an additional layer of security, which is
dangerous.

Moxie broke the current SSL usage, and therefore, he broke the JS crypto (as
he controls the communication channel). He beat the current state of the
challenge.

~~~
Dylan16807
To me it seems far too pedantic to give an award for pointing out, based on a
forum post that _blocks links_ (ask HN posts), that if you incorrectly use the
partial url you are at risk.

~~~
ryalfalpha
That's a valid point, but I can't see it being impossible that the user will
at some point accidentally send an http request rather than https, whether
from user input or a maliciously placed link.

My understanding is that the HSTS header would make this attack less useful.
But it's still a concern if you used Private Browsing/Incognito: the initial
request will still hit a 301 (vulnerable to interception by a MITM). I've just
verified this with Facebook.com on my machine (Chrome 26, OS X).

I think it's quite fair to expect that a user of this kind of site is likely
to use Incognito.

I'm actually kind of surprised as I thought Chrome had a standard list of
sites that use https only such as Facebook. (Woah.. seems the preload list is
TINY <http://www.chromium.org/sts> )

~~~
Dylan16807
Some day maybe we'll see browser-enforced secure DNS that has the ability to
include certificates or set HSTS. Maybe the same day IPv6 finally takes over,
in a few centuries.

I like the kind of pinning and preloading that chrome does but it's such a
tiny gesture compared to the size of the internet, and nobody else seems to be
trying to deploy better security.

~~~
ryalfalpha
Someday ;)

Perhaps there could be open whitelists where sites could nominate their sites
as 'https only'. Wouldn't even need to be built into the browsers, could just
be a thing people do when they launch a clean browser install, hit up
<https://blahsitelist.com> and click a button that fires off https requests to
all of those sites, which would cache the HSTS header? (I've only stumbled on
HSTS headers today, so I may be overly optimistic as to their usefulness.)

Although, come to think of it, isn't that just basically what the
HttpsEverywhere extension does?

------
tptacek
<http://www.schneier.com/crypto-gram-9902.html#snakeoil>

See "Warning Sign #9".

~~~
absherwin
This isn't a product launch. It's a demonstration to focus discussion.

~~~
tptacek
I like that! From now on, when you want to introduce some new crypto idea,
just make sure not to call it a "product", then issue a $1000 contest to
assure people the idea works. Why doesn't everyone do that?

Oh, wait, I think Schneier answered that:

 _Contests are a terrible way to demonstrate security. A
product/system/protocol/algorithm that has survived a contest unbroken is not
obviously more trustworthy than one that has not been the subject of a
contest. The best products/systems/protocols/algorithms available today have
not been the subjects of any contests, and probably never will be. Contests
generally don't produce useful data. There are three basic reasons why this is
so._

They are:

1\. The contests are generally unfair.

2\. The analysis is not controlled.

3\. Contest prizes are rarely good incentives.

I'd submit that (1) doesn't count here, because the idea you're demonstrating
is so obviously flawed that contestants aren't at any disadvantage. But (2)
and (3) are absolutely valid here: there's no structure to the contest (it's a
bunch of Hacker News people poking at a page at random with no collaboration,
milestones, or test plans), and $1000 buys ~3 hours of cryptanalysis work if
you source it from software security people instead of actual cryptographers
(who bill north of $450/hr).

I have no idea who you are and so I don't want to sound like I'm offended by
what you've posted. But you are like the 100th person to staple SJCL onto a
web app and posit that they've created something more secure than a private
wiki. Actual professional cryptographers have addressed similar claims in the
past. Here's Nate Lawson:

[http://rdist.root.org/2010/11/29/final-post-on-javascript-
cr...](http://rdist.root.org/2010/11/29/final-post-on-javascript-crypto/)

Instead of the brinksmanship of offering a contest, why don't you instead just
listen to the arguments people are making and try to learn from them?

 _Triple bonus points for noting that AES and SHA3 were the products of design
contests, after Schneier wrote this, and then observing the differences
between those design contests and the one at the top of this thread._

~~~
absherwin
I agree it only proves security up to a certain value. Would you be happier if
I increased the reward?

Regardless, what do you think of someone who publicly calls something a
terribly insecure idea, and then reacts with snark and doesn't actually
attempt to crack the system when the opportunity is put before him by someone
willing to bend over backwards to help him crack it?

Propose a plan to actually crack the system. Do you want the database? Do you
want to control the network where I log in?

~~~
tptacek
Imagine if people built bridges the way you propose building secure software.
"What do you think of someone who publicly calls a new bridge design unsafe,
and then reacts with snark and doesn't actually attempt to destroy the bridge
when the opportunity is put before her by someone willing to bend over
backwards to help her do it?"

Engineering doesn't work that way. Your proposed solution doesn't become more
sound simply because you feel aggrieved at the way people react to it.

Also: it's deeply dishonest to suggest that the only reaction you've received
to this design is "snark". As I pointed out above, with a link and everything,
and as you yourself acknowledged in your original post, you've been given a
litany of reasons why your proposed design is flawed. _You just don't seem to
like hearing them._

~~~
absherwin
I'd expect them to propose a scenario that could be tested either with
software or a miniature. You're under no obligation to, but examples are
persuasive. Feynman's ice water demonstration convinced far more people than
his well-reasoned appendix.

I'm attempting to give those who believe that JS crypto should never be used a
way to make a clear public demonstration. What better target could you ask
for? A web application hacked together in a few hours by someone with no
training in computer security who isn't even a professional software engineer
and who is willing to arrange scenarios favorable to the attacker.

I'm not aggrieved by the negative reaction. I have no skin in this game. I
don't earn anything if people walk away believing this idea is more secure. I
just wish you wouldn't keep repeating the same canard about having to
bootstrap the crypto on every use while attacking the messenger and the manner
in which the message is being delivered.

------
AnIrishDuck
This is bad even as contests go.

1) You don't provide a concise summary of what threat models you are
considering. So in that case, my attack is: I break into your server and
silently swap your code with code that ships me your user's password.

2) You don't even give a human readable listing of your site's source code.
Reverse engineering is tedious work that's entirely different from
cryptography or security analysis. You can be sure that any motivated
attackers will have skilled reverse engineers deconstruct your system and then
hand off their results to skilled security analysts/cryptographers.

But that sure as hell will cost more than the paltry $1000 you're offering
here.

~~~
absherwin
1\. I proposed a few examples: crypto flaws that you could exploit given the
ciphertext (only one person has asked), cross-site request vulnerabilities,
and attacks from a malicious network. I also invited you to propose your own
attacks.

2\. You have the client side code. I'm happy to provide the remaining code if
anyone is actually curious.

~~~
AnIrishDuck
1\. I don't see how any of these threat models can't be addressed by doing
server-side crypto. Server-side crypto libraries are more mature, more
reviewed, and thus by default better from a security standpoint.

2\. The client side code is obfuscated. The server-side code is not provided.
To perform a reasonable analysis of a system's security, a reviewer needs
unobfuscated access to both. Why would you try and make security and crypto
researchers de-obfuscate or guess your code, when that is not their
specialization?

~~~
absherwin
1\. As discussed in another comment, this requires a similar level of trust to
encrypting the data on the server. The only difference here is that, to the
best of my knowledge, an attacker who took control of the server could
silently log the plaintexts in the server-side case, while some indication
would be provided in the client-side case. Most wouldn't notice it, but some
would eventually.

2\. The code I wrote isn't obfuscated. The code at the top is just SJCL and
angular.js which are <https://github.com/bitwiseshiftleft/sjcl> and
<https://github.com/angular/angular.js>

~~~
AnIrishDuck
1\. A server could silently insert code that replaces your keys with a well
known one. Don't kid yourself into thinking anyone would notice this if it was
silently tucked away in the minified angular or SJCL code.

2\. Your definition of obfuscated and mine clearly differ. I'll be more
direct: where's the (well-commented, clearly coded) model/controller code?

~~~
absherwin
1\. Agreed. That's an avenue by which a vulnerability in the server or
communication channel could be exploited.

2\. It doesn't exist. The code was written in the same form it was uploaded.

------
aidenn0
Here's the issue I have with in-browser JS crypto: let's say I compromise your
server. I can now send code that uploads all of the users' keys to me, without
needing to compromise their individual machines.

There are similar issues with auto-update mechanisms; if someone owns the
chrome auto-update system then they can run arbitrary code on everybody's
machine. That's higher-stakes, but also more secure than the average webserver
(which if javascript crypto were to become ubiquitous, would become an issue)

~~~
absherwin
What's an alternative implementation that encrypts the data at rest and is
more resilient to a compromised server?

~~~
tptacek
A downloadable client that doesn't bootstrap its own crypto every time it runs
from your server.

Can I have my $1000 now? Actually: I'd prefer if you just sent it to Partners
in Health; they're great.

~~~
absherwin
We can argue about the merits of downloadable solutions but that isn't a flaw
in this application. I'm willing to make it easy for you: You can even have
the database to let you crack it without first having to break into my server.

I agree with you that there are risks to browser-side crypto, but how are they
any different from upgrading software without reevaluating the security of
each upgrade? Browser-based crypto also has benefits; most users prefer web
applications to installed ones, all else equal, because it's easier.

~~~
tptacek
Because normal users don't upgrade their applications every time they run
them? I feel like people have to entertain this argument every time Javascript
crypto comes up, and that it's a self-evidently silly argument. Would you feel
as comfortable running SSH as a Java applet delivered from a website every
time you needed it as you do running /usr/bin/ssh? _Of course you would not_.

~~~
AnIrishDuck
While I understand your point and agree mostly with what you're saying, it's
worth noting that a functionally equivalent thing happens with /usr/bin/ssh on
most linux systems.

A remote web site is consulted (usually via plain http, not even https!) and
new code is downloaded. Signatures are checked locally (using gpg, which is
itself bootstrapped from trusted installation media, and can be updated at any
time like any other package), and then new software is installed.

The update cycle is longer, but it's still there. Under the threat model of
"your source of software is compromised", no system is safe.

~~~
CodeMage
The software source for stuff like /usr/bin/ssh is a thing that has some
serious infrastructure, well-established procedures and well-organized people.
Right from the start, the trustworthiness of that source is higher than that
of an average, arbitrary site.

I say "arbitrary", because I assume you would want to serve this JavaScript
solution from your own site, the way people do with jQuery.

On the other hand, if you do decide to serve it from one central place, then
you could establish a similar level of trustworthiness for that source, but
your solution would still be at a disadvantage. Running JavaScript is like
downloading new software to your home directory on a regular basis and
executing it; you don't need to compromise the software source for
/usr/bin/ssh, you just need to compromise a less trustworthy bit of software
and then make that bit modify the in-memory behavior of /usr/bin/ssh (because
JavaScript doesn't have the kind of security the OS gives processes).

~~~
AnIrishDuck
> The software source for stuff like /usr/bin/ssh is a thing that has some
> serious infrastructure, well-established procedures and well-organized
> people. Right from the start the trustworthiness of that source is higher
> than average, arbitrary site.

I agree. That doesn't change my point: JavaScript or binary packages, if
you're running crypto you need to be able to trust the source of your
software.

> Running JavaScript is like downloading new software to your home directory
> on a regular basis and executing it; you don't need to compromise the
> software source for /usr/bin/ssh, you just need to compromise a less
> trustworthy bit of software and then make that bit modify the in-memory
> behavior of /usr/bin/ssh (because JavaScript doesn't have the kind of
> security the OS gives processes).

This analogy is strained. It only really applies if your site is vulnerable to
XSS attacks. After that, JavaScript is generally far more restricted than your
average userland code. JavaScript from one site can't modify javascript from
another (well-constructed) site. Whereas you correctly point out that once you
can execute arbitrary code with user level privileges, the game is usually up.

In particular, X session keylogging means you can grab a sudo password from a
user once you can spy on their session, which any userland program can do.
JavaScript from attack.com can't monitor keypresses on trustedbank.com unless
it's explicitly included (which can happen by accident via XSS).

Look up the same origin policy to see the kind of constraints that are usually
placed on JS code.

------
MichaelGG
What does this provide over just using SSL and deriving a key from the user's
password and doing crypto on the server? Nothing.

Anyone that compromises the server can compromise the client and have it send
the keys back.

Your test just points out that SSL is useful; all the security hinges on SSL.

~~~
absherwin
Having a server compromised is bad. Which is worse, the ability to read
plaintext on the server silently or forcing a code change to be distributed to
every user, however subtle?

That said, the key for security is to design an app where compromising the
server isn't possible.

~~~
rosser
_That said, the key for security is to design an app where compromising the
server isn't possible._

That's simply not possible. There's always an exploit somewhere. Continually
kicking the can of security "upstream" and being confident that your stuff is
secure so long as no-one else messes up is naïve and myopic.

~~~
absherwin
In that case is it possible to design a secure application? Can you design an
application used by real world people that will remain secure even if the
infrastructure that controls it is taken over?

If a server is taken over, the best one can hope for is sufficiently layered
defenses to limit the immediate gains of the attacker, and an alarm to warn
users not to trust the application until further notice.

------
napoleond
The problem with in-browser crypto is way bigger than the feasibility of
properly implementing crypto in the browser--it's the fact that you can never
ensure the security of the network. As such, I think the prevailing argument
against in-browser crypto is that it provides a false sense of security.

EDIT: Nevertheless, this seems like a neat project! I'm not anything
resembling a pentester though, so I'll leave the challenge to the experts :)

~~~
absherwin
Network security is a problem, but breaking https doesn't seem trivial in the
wild. That's why I'm inviting people to try exploits on a real app and am
willing to use a malicious network to let them break it.

~~~
e1ven
So let's talk this through.

I think the idea behind crypto in the browser is cool, but it seems like
anytime you're requesting the JS from the server, you need to trust the server
-

Instead of serving crypto.js, you could serve plaintext.js, and I wouldn't
ever notice the difference, would I?

So if we agree that I need to trust you, to serve the JS crypto properly, then
what's the difference between doing that and trusting you to encrypt the text
for me?

~~~
absherwin
Yes, you need to trust me. I'd argue it's easier to trust me if the crypto is
happening on the client since if I change it, I could be caught. The
probability of being caught in any one instance is very low but over many
instances is high.

Would you make the same argument for a native application? Unless you read the
source code every time you upgrade, you're exposed to the same risk. cperciva
did an inadvertent real-world experiment on this by making an error during a
refactoring of Tarsnap, and it took 19 months for the subtle crypto bug to be
noticed.

~~~
e1ven
If you changed it globally, you might be noticed next time someone did an
audit, which I suspect would be rare.

If you added a backdoor on the server that said "If userid == e1ven{crypto.js
= plaintext.js}" it would be much harder to detect.

Hushmail did something similar with a modified version of its Java applet a
few years ago:

<http://www.wired.com/threatlevel/2007/11/hushmail-to-war/>

My point isn't that you should never trust webapps - it's that doing the
processing in JS, or on the server, doesn't CHANGE the threat model. In both
scenarios, I need to trust you an identical amount, so there's no security
advantage to doing the crypto on the client side.

~~~
pyre
Well, it's _slightly_ different. Every person might not check every single
time, but it would be possible to do something like write a browser plugin (or
setup some sort of automated system) to check what was going on since the
execution is happening client-side.

I made sure to emphasize 'slightly' because this could just trigger an arms
race to getting around detection. You might be safe if you kept your detection
methods a secret (and only personally used them). On the other hand, if the
detection methods were used/deployed on a wide scale, then anyone trying to
compromise you would just work around the detection.

~~~
tptacek
What would that browser plugin check for? How would you write it?

~~~
pyre
\- Check signature of the incoming JavaScript against known good versions.

\- Check the signature of the HTML page against known good versions.

\- Check that the information posted back to the server 'looks encrypted' vs.
plaintext[1].

\- Check the external resources that the page is requesting. Is it grabbing
Javascript files that are unexpected (e.g. trying to serve up a known-good
version of crypto.js, but then overwrite its methods with another Javascript
file).
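The first two checks amount to pinning cryptographic hashes of known-good resources. A sketch of the core comparison (the names and the "audited" script are hypothetical):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical: the digest of an audited copy of the page's crypto code,
# recorded by the plugin author at review time.
audited_js = b"function encrypt(key, msg) { /* reviewed code */ }"
PINNED = {"crypto.js": sha256_hex(audited_js)}

def verify(name: str, served: bytes) -> bool:
    """Flag the page if a served resource differs from its audited version."""
    return PINNED.get(name) == sha256_hex(served)

assert verify("crypto.js", audited_js)                          # untouched
assert not verify("crypto.js", audited_js + b"/* backdoor */")  # tampered
```

The plugin would need a pinned digest for every script and for the HTML itself, plus a pin update on every legitimate deploy.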

I'm not sure if many of these things would be possible in Chrome/Chromium, but
probably in Firefox.

[1] Obviously 'looking encrypted' isn't some sort of binary decision, but I'm
guessing there is some amount of checking you could do to see how closely it
resembles random noise. If you sent random noise, and it wasn't encrypted, it
would probably pass this check, but most people trying to protect something
are probably sending something that won't trip this 'alarm.' This is not fool-
proof, but adds a layer of protection when used with other things.
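As a rough illustration of footnote [1], here is one way to score how closely outgoing bytes resemble random noise (illustrative only; as the footnote says, it catches only the crudest failure, plaintext going out in the clear):

```python
import math
import os
from collections import Counter

def bits_per_byte(data: bytes) -> float:
    """Shannon entropy of the byte distribution; ciphertext sits near 8.0."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

plaintext = b"the quick brown fox jumps over the lazy dog " * 100
random_looking = os.urandom(4096)  # stands in for well-encrypted output

assert bits_per_byte(plaintext) < 6.0       # English text: far below 8
assert bits_per_byte(random_looking) > 7.5  # uniform bytes: close to 8
```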

~~~
tptacek
"Looks encrypted" isn't useful; encryption under a known key or with unsafe
parameters "looked encrypted" too.

Are you _sure_ you've captured every case that could influence what functions
are bound to what symbols in the JS runtime?

~~~
pyre

      | "Looks encrypted" isn't useful; encryption
      | under a known key or with unsafe parameters
      | "looked encrypted" too.
    

Those go without saying. It would have to be part of a layered approach, and
would catch stuff like plaintext going out.

    
    
      | Are you sure you've captured every case that
      | could influence what functions are bound to
      | what symbols in the JS runtime?
    

I'm not. I wouldn't trust myself to implement such a thing (at least not
without a lot of peer review from people I trust as knowledgeable), and even
with such a 'detection' plugin, I would be wary of using in-browser crypto.

I'm curious what other inputs into the system you think there could be though.
If you verify the HTML, and the external resources against 'known good'
versions, then what else is there?

\- Maybe there's malware already installed on the client system that's a
threat, but that's a threat to everything, not something specific to in-
browser crypto.

\- A man-in-the-middle attack is mostly mitigated by using SSL (though not
100%).

\- A compromised/malicious server, will end up changing the JS and/or HTML,
which would (hopefully, if you've done a good job) not pass your verification
checks.

\- The other possibility would be a browser exploit that somehow is triggered
before the plugin can raise a red flag about unverified JS/HTML.

\--

The entire point of my posts in this discussion thread was to say that crypto
in the browser vs. crypto on the server may have the same threat model
(trusting the server + SSL), but they are not exactly the same. With in-
browser-js crypto, as the client you have full access to the environment where
the crypto is running. If it's happening on the server, it's a blackbox to
you. This opens the possibility to have software running on the client side to
verify that things are kosher. In the end, by the time that you're writing the
software on the client-side to verify things you may as well just be doing the
crypto in a browser plugin rather than in JS. I realize that it's mostly an
academic argument.

------
oconnore
You have acquired just enough rope to hang yourself with.

------
h8trswana8
I don't see any key being stored client side. So you must be deriving the key
from my password?

If that's the case, then my attack would be:

* Loop through a list of common passwords.

* Derive keys from those passwords.

* See if any decrypted results return an ASCII-like result.
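The loop described above is only a few lines. A standard-library-only sketch, where a toy SHA-256 keystream stands in for the app's real AES (against an actual database dump you would swap in an SJCL-compatible cipher); the salt, wordlist, and note are hypothetical:

```python
import hashlib

def derive_key(password: str, salt: bytes) -> bytes:
    # Same shape as the app's derivation: PBKDF2-SHA256, 1000 iterations.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 1000)

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stand-in for AES: XOR against a SHA-256 counter keystream.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def looks_like_text(data: bytes) -> bool:
    return all(32 <= b < 127 for b in data)  # printable ASCII

SALT = b"per-user-salt"  # hypothetical; in practice it ships with the ciphertext

# A note "encrypted" under a weak password, as seen in a database dump.
ciphertext = keystream_xor(derive_key("letmein", SALT), b"meet me at midnight")

# The attack: derive a key per candidate password, attempt a decryption,
# and keep whichever guesses yield something ASCII-like.
wordlist = ["123456", "password", "letmein", "qwerty"]
hits = [pw for pw in wordlist
        if looks_like_text(keystream_xor(derive_key(pw, SALT), ciphertext))]
assert "letmein" in hits
```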

~~~
absherwin
That would work for a sufficiently broad definition of common. You're welcome
to try. The challenge is that it'll take a long time unless you have
incredibly fast hardware or are very good at guessing.

~~~
h8trswana8
Eh, whatever key derivation algorithm you are using is running in a browser
and isn't triggering any long-running-script warnings. And I didn't see my
browser block. So it can't be that expensive.

From an academic viewpoint, it's not really strong encryption if your AES key
is derived from a password. Your key space is limited to ASCII characters, and
99% of users will not choose a strong password. So from my perspective, if you
sent me a DB dump, I could read almost everything.

~~~
absherwin
Email me and I'll send you the DB dump. You can brute force it. I don't think
it's as trivial as you suspect, but I'd love for you to prove me wrong.

~~~
h8trswana8
It's trivial to brute force for anyone who has a weak password.

More importantly, it's trivial for an adversary who cares.

If I'm encrypting a note containing state secrets to send to a foreign
intelligence officer, the NSA has the technology (and more importantly, the
resources) to brute force their way in.

And if your password is too complex to crack (read: a 256-bit key), you
probably can't remember it either, which means you have to write it down
somewhere; so an adversary who cares would find an outside channel (subpoena,
hack your personal computer) to determine your key.

What is your key derivation algorithm? PBKDF2?

~~~
absherwin
The key is derived by PBKDF2 with 1000 iterations. For weaker passwords, I
suspect a couple of orders of magnitude of strengthening would be required.

Your point about weak passwords holds in both ordinary clients and in the
browser. It's just a matter of degree. There are plenty of sufficiently strong
passwords that are memorable. Since the degree of weakness tolerable is
logarithmically proportional to the hashing time and JS is usually within an
order of magnitude of native code, the additional entropy required is small
given equivalent hashing time.
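The linear cost of PBKDF2 is easy to see with the standard-library implementation (parameters illustrative): each factor of ten in iterations makes every guess in a brute-force run ten times slower, i.e. buys roughly log2(10) ≈ 3.3 bits of effective password strength.

```python
import hashlib
import time

def derive(password: str, iterations: int) -> bytes:
    # PBKDF2-SHA256 as above; the salt here is purely illustrative.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), b"salt", iterations)

def seconds(iterations: int) -> float:
    start = time.perf_counter()
    derive("correct horse battery staple", iterations)
    return time.perf_counter() - start

# 100x the iterations means ~100x the work, for defender and attacker alike.
assert len(derive("pw", 1_000)) == 32      # SHA-256 output size
assert seconds(100_000) > seconds(1_000)
```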

------
borski
Clickable: <https://notecrypt.appspot.com/>

------
nwh
Consider using crypto.randomBytes() for your key generation.

~~~
absherwin
SJCL fills its initial entropy pool with crypto.getRandomValues()

~~~
nwh
Apologies, I missed that on first glance. I'm not sure why you'd want to run
your CSPRNG through a bunch of additional functions: at best you're doing
nothing, at worst you're reducing the quality of the entropy.

~~~
absherwin
NP. Checking code is good. It's not the clearest code, since the
initialization is located separately from the main RNG function. If you
actually tried to read the inline minified version of SJCL, I'm sorry but
impressed.

------
loumf
Can you explain what this means exactly: "I'll navigate to any URL you send me
while logged into the site."

If I give you a URL what exactly are you going to do?

Also, when logged into the site, are you logged into the account where the
note with claim instructions was made?

~~~
absherwin
I'll paste it into another tab in a browser that has me logged into the
account or I'll log into the account after pasting the link. Whichever you
prefer. If you have another proposal, as long as it's something reasonable,
I'm happy to entertain it.

------
nicksdjohnson
tl;dr: You've set up the challenge in such a way that demonstrating any of the
threat models against which client side crypto is weak would require
compromising other layers of security first that are out of scope for the
challenge.

