
What's wrong with in-browser cryptography? - bascule
http://tonyarcieri.com/whats-wrong-with-webcrypto
======
woah
I think it's obvious to everyone that code coming directly from the server
cannot be completely trusted by the end user. The only solution that might
work is signed browser extensions. I really would have liked to learn more
about the considerations when taking this route.

~~~
bsdetector
However, code coming from the server can be verified, so you can know if the
crypto.js delivered to you was modified from a known good version. You can
audit a particular version and say, "this works correctly", and then look at
changes over time.

For instance, you can have a Greasemonkey script that tells you when the
crypto code is different from last time.
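
A minimal sketch of what such a userscript might look like, assuming the
synchronous Greasemonkey storage API plus fetch/WebCrypto; the script URL is
a hypothetical placeholder:

    // ==UserScript==
    // @name     crypto.js change detector
    // @match    https://example.com/*
    // @grant    GM_getValue
    // @grant    GM_setValue
    // ==/UserScript==

    // Hypothetical location of the crypto code being audited.
    const SCRIPT_URL = 'https://example.com/js/crypto.js';

    fetch(SCRIPT_URL)
      .then(resp => resp.text())
      .then(src => crypto.subtle.digest('SHA-256', new TextEncoder().encode(src)))
      .then(buf => {
        // Hex-encode the digest and compare with the hash stored last visit.
        const hex = Array.from(new Uint8Array(buf))
          .map(b => b.toString(16).padStart(2, '0'))
          .join('');
        const last = GM_getValue('lastHash', null);
        if (last !== null && last !== hex) {
          alert('crypto.js differs from the last version you audited!');
        }
        GM_setValue('lastHash', hex);
      });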

The best solution realistically would be SSL _plus_ encrypting data first in
the browser (separately encrypt the data and the transmission of it).

~~~
simbolit
sorry, this is only somewhat related, but i am really ignorant. this is a
genuine question that i have had for quite some time now.

when i download security relevant software, the website usually gives me a
hash/signature of the download file, so i can verify afterwards whether the
data i downloaded really is the file i want or whether it has been tampered
with.

but if an attacker switches the downloaded file for a malicious one, why
would s/he be so stupid as to not also switch the hash/signature on the
website? i am positive i am missing something, as really smart people [0] are
doing this, but i do not get it. thanks in advance.

[0] [https://www.torproject.org/projects/torbrowser.html.en](https://www.torproject.org/projects/torbrowser.html.en)

~~~
michaelt
The signature is generated using public key cryptography [1], meaning that to
generate a valid signature, you have to have the private key.

Because the signature only has to be generated when a new version of the file
is released, you can use the most bothersome (and effective) manual methods of
security, like keeping the private key on an air-gapped [2] computer.

So if an attacker manages to switch the downloaded file for a malicious one,
they may still not have got access to the private key used for release
signing.
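
To make that concrete, here is a sketch of the verification half (the
downloader's side) as it might look with WebCrypto; real releases are usually
verified with a tool like GPG, and the RSA/JWK choices here are assumptions:

    // Sketch: verify a detached signature over downloaded bytes with
    // WebCrypto. pubKeyJwk ships ahead of time (e.g. baked into an app);
    // only the holder of the matching private key could have produced
    // sigBytes, no matter who serves the file itself.
    async function verifyDownload(pubKeyJwk, fileBytes, sigBytes) {
      const key = await crypto.subtle.importKey(
        'jwk', pubKeyJwk,
        { name: 'RSASSA-PKCS1-v1_5', hash: 'SHA-256' },
        false, ['verify']);
      return crypto.subtle.verify('RSASSA-PKCS1-v1_5', key, sigBytes, fileBytes);
    }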

Of course, the attacker could still remove all links to the signature file and
all mention of it in the public documentation, so downloaders wouldn't know
there was a signature to check. So it's better if the site isn't compromised
in the first place.

[1] [https://en.wikipedia.org/wiki/Public-key_cryptography](https://en.wikipedia.org/wiki/Public-key_cryptography)
[2] [https://en.wikipedia.org/wiki/Air_gap_%28networking%29](https://en.wikipedia.org/wiki/Air_gap_%28networking%29)

~~~
ssk42
So this will also probably be another seemingly clueless question, but after
reading up on air-gapped computers, how would that help with the new hash?

------
benmmurphy
The big problem with most (all?) in-browser cryptography is that you still
need to trust the server. If you have to trust the server, what is the value
of doing crypto on the client side? You may as well do crypto on the server
side and communicate between the server and the client using SSL.

~~~
nadaviv
You pretty much have the same issue with any desktop software - you have to
trust the authors and the server that gives you the download files (or the
server(s) that give you checksums and public keys to verify the download).

Doing in-browser crypto requires less trust than server-side crypto. With
in-browser crypto, the server would have to mount an active attack and serve
malicious scripts, which could be detected. When everything happens on the
server, the attack would be passive, requiring no client-side modification,
and could easily go undetected.

Also, with open-source websites, people could audit the source code and ensure
it works as advertised. Combined with a browser extension that verifies the
source code from the website matches the code on the public code repository,
this could make it much easier to trust websites to do in-browser crypto
right.

------
danielharan
I have a problem that is best solved by encrypting data in the browser -
because the server should not be trusted with seeing the information in the
clear.

This is much like the exception the OP mentions at LivingSocial. Can anyone
weigh in on whether that's safe?

~~~
orthecreedence
Safe(r) than just sending it in the clear so the server can read it, probably.
However, if your server gets compromised, the JS code doing the crypto can
easily be replaced to just return the results in plaintext... in which case
you're still doing SSL, so good job, but the CC#s are vulnerable to whoever
wishes to sniff them on the server, which is pretty much how most online
systems work anyway.

It couldn't hurt to use asymmetric encryption with the provider's public key,
but just beware that without packaging/signing the crypto code, it's a toss-up
whether or not it's actually there.
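
A rough sketch of that suggestion, assuming the provider's key is pinned
somewhere the server can't swap out (e.g. inside a signed extension), and
with hypothetical names:

    // Sketch: encrypt a sensitive field to the provider's public key before
    // it leaves the browser (RSA-OAEP via WebCrypto). providerJwk must be
    // pinned client-side, not fetched from the very server being distrusted.
    async function sealField(providerJwk, plaintext) {
      const key = await crypto.subtle.importKey(
        'jwk', providerJwk,
        { name: 'RSA-OAEP', hash: 'SHA-256' },
        false, ['encrypt']);
      const ct = await crypto.subtle.encrypt(
        { name: 'RSA-OAEP' }, key, new TextEncoder().encode(plaintext));
      // Base64-encode the (small) ciphertext for transport.
      return btoa(String.fromCharCode(...new Uint8Array(ct)));
    }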

Edit: if it's absolutely imperative that the server not be able to read the
message, then not packaging _all_ the code is _unsafe_. However, if the server
not being able to read the message is nice to have but non-essential, then by
all means, run your code in a webapp.

~~~
danielharan
Thanks. I'd use a browser extension, and also constrain what gets sent to the
server.

If the server was really compromised, it could just copy the CC# into another
field in the form (so it gets sent twice, encrypted and unencrypted) - and the
server would get the data without having to change the JS. Integrity checks
would not turn anything up. A properly paranoid browser extension should have
a specified format for sending a form, so that no extra information is leaked.
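
For illustration, such a "specified format" might be as simple as a whitelist
serializer in the extension (field names hypothetical):

    // Sketch: only whitelisted fields are ever serialized and sent; any
    // extra field a compromised page injects into the form is dropped.
    const ALLOWED_FIELDS = ['bid_amount', 'item_id']; // hypothetical names

    function serializeForSending(form) {
      const payload = {};
      for (const name of ALLOWED_FIELDS) {
        payload[name] = form.elements[name].value;
      }
      return JSON.stringify(payload);
    }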

In my case it's bids rather than CC#s, though the same pattern holds of
encrypting with the tender creator's public key.

------
ballard
If you were to install some packages on a system, would you install unsigned
ones that are allowed to modify every other package, altering how other
programs work, without confirmation? That's effectively the state of browser
security right now.

We must stop living in this fantasy playground where having no real security
in the browser is treated as acceptable, because important things like
passwords, PINs and credit card numbers pass through it.

An ability to sign (not hashes, _signed_) and verify JS, CSS and HTML would
be a start. Once verified, a security policy* is applied, further sandboxing
an app so it doesn't go to shit with modifications or inappropriate
introspection by injected or malicious code (while usually retaining normal
things like the browser and plugins). Also needed are reasonable limits to
mark objects and parts of the DOM as only available to certain objects, not
Orwellian mandates that throw webdev into chaos.

This is something that would take a great deal of coordination with reluctant
vendors that see change as cost, but it would be the biggest win for browser
security.

* Something a bazillion times easier to use than SELinux.

~~~
kybernetikos
> An ability to sign (not hashes, signed) and verify JS, CSS and HTML would be
> a start.

This apparently exists, although I've never managed to get it to work myself:
[http://www-archive.mozilla.org/projects/security/components/signed-scripts.html](http://www-archive.mozilla.org/projects/security/components/signed-scripts.html)

------
orthecreedence
Sensible post. Yes, doing crypto in the browser is generally harder due to not
having direct access to OpenSSL and due to XSS and many other JavaScript-
related vulnerabilities. But if you cover your ass, don't eval code willy-
nilly, and package _everything_ into a browser extension (not just your public
key or 90% of your JS, but _everything_), you can mitigate a lot of potential
threats. Also, don't try to reinvent SSL. If you do your own crypto in the
browser, it should be for peer-to-peer, not peer-to-server.

With something like Firefox, you're kind of screwed. For instance, if I open
Firebug while using my Turtl extension, I can read the contents of memory
right there... there's no real separation/sandboxing between extensions. If
one extension can read the contents of another, you've got problems. Chrome
does this better.

The best way is to package everything into a separate (OS-level) app.

~~~
oakwhiz
>The best way is to package everything into a separate (os-level) app.

I agree with this on the premise of threats like XSS; however, there is a
human aspect of security that might be overlooked. The problem with apps is
that everything that is a website today will become a native app tomorrow. App
developers might claim that their app is more secure than using a website to
do the same thing, because of the included native crypto implementation. Users
might then associate any native app with having greater security than a
website accessed through a browser (which may not necessarily be true). Users
might become further desensitized to installing everything from the internet
as an app.

Why is this a problem? Websites normally cannot access things like your
personal files without you explicitly choosing a file from your computer
using a file dialog, whereas if you run a native app with your user
credentials, it is immediately able to access any file that you have
permission to access, without necessarily informing you.

Perhaps OS-level security and sandboxing is going to have to improve for this
sort of thing to be a good option.

~~~
orthecreedence
I can see this side of it as well. I tend to come from the camp that if you
don't release your code as open-source, you can't _really_ call yourself
secure. If you're 100% open-source, distributing a desktop/mobile app will
have enough transparency to determine "is it going to steal my CC numbers?"

There's also a barrier to entry to get a user to install anything. Mobile is
different, but in the desktop world, if I told my users to download the "CNN
desktop app!" they'd roll their eyes, because why would anyone install some
trashy malware-ridden program when they can just look at the website, for
free, and, as you put it, much more safely? From my perspective, the only
reason to distribute a "secure" desktop app version of your webapp is _if you
don't have a webapp to begin with because it's not secure to do so_. So the
desktop app would be an open-source complement to the browser extensions your
secure app uses.

~~~
oakwhiz
The products where users seem to preferentially install native apps are
services like Dropbox, or video games, where there is value added by not using
the "website version." Something that would be very revealing is to
investigate how secure these kinds of apps are today. I find it unlikely that
in the near future, native app security practices will change very much.

------
nadaviv
With Bitrated, I'm trying to resolve the main issue he's raising by creating
a browser extension that verifies that all the content served from the
webserver is properly digitally signed and matches the source code repository
on GitHub.

("Browser extension" at
[https://www.bitrated.com/security.html](https://www.bitrated.com/security.html))

This two-factor verification helps protect against attacks, both by a third
party attacker and by the service operator itself.

------
cybernytrix
Anyone interested in working on a man-in-the-middle-resistant version of
browser/JavaScript encryption? I have the basic idea described here:
[http://www.research.rutgers.edu/~ashwink/ajaxcrypt/index.html](http://www.research.rutgers.edu/~ashwink/ajaxcrypt/index.html)
I have working code, but it needs some refactoring.

------
ams6110
I have this feeling that one day we'll look back in a sort of amused horror
that people used to let their browsers run code they downloaded from unknown
servers. But I'm not really sure what we'll be doing instead of that.

~~~
Dylan16807
You can rather easily make a verified kernel that executes foreign code
safely. The trick is in also making it fast and featured.

~~~
joveian
IMO, this is really important, and system application security is not in much
better shape than web browser security; you still need to trust a huge number
of sources, and these sources regularly have known security issues. For that
matter, it is probably not that difficult to insert known-vulnerable code into
some widely used application without it being detected. While there will
always need to be some trusted code, it is possible to limit the amount of
trusted code and the number of sources that code comes from, given better OS
security models.

Since hardware and firmware can also be subverted, IMO a new security model
should also be able to track and limit what network traffic is intended, so
that the traffic actually generated can (potentially) be verified by
additional machines on the network and/or by other verifier virtual machines
on the same system.

------
salient
Could Dart or other languages in the browser solve this problem of not being
able to do crypto properly in the browser, or is this a more fundamental
problem of having crypto inside a browser in the first place?

~~~
skybrian
The fundamental issue is that the code needs to be audited and signed offline
so it can't be changed. Unsigned code should not be run. This makes releases
harder (no push on commit).

It's not language-specific. App stores already work this way (including the
Chrome web store), so a deployment mechanism is there; it's just not used as
often as it should be.

It's funny, back in the day we debated ActiveX (code signing) versus sandboxes
(Java and JavaScript), and the real answer is that you should have both (as
Android does).

But code signing isn't magic either. You still need someone to do the
auditing, and users to be careful about what they run. Since users are so
trusting, I'm not sure that app stores in practice are any more secure.

~~~
vesinisa
How does this differ from HTTPS? Since we're dealing with the World Wide Web,
it can't be that you would have to install the public key of _every_ site that
you visit to get signed versions of the code – it would make browsing
infeasible.

It just boils down to the CA-based system that is already employed by HTTPS.
You must of course trust the other party and their systems, but that's exactly
the same with code signing. To identify the other party, code signing uses a
pre-shared secret (e.g. your OS comes pre-installed with their public key). In
HTTPS you identify the other party by them possessing the right certificate,
one that's signed by a trusted intermediary. There's no really better way to
do it, bar quantum crypto and HTTPS+DNSSEC, the latter of which is sadly not
widely supported.

~~~
skybrian
The difference is that if a server running HTTPS is hacked, the attacker can
modify the JavaScript at will. With an app that's signed offline, this isn't
possible; you could even redistribute it on an untrusted mirror.

Also, in theory, an independent auditor could rebuild the app from source and
verify that the bits match, and do an independent audit of the source. So
signed apps are more verifiable.

The trusted base for a signed app is (browser + app + app signer), not
(browser + app + server), where the server might be a virtual machine in the
cloud and you need to trust the virtual machine host provider too.

This doesn't matter most of the time because you have to trust the server
anyway, but in the case of someone wanting to encrypt something on the client
that cannot be decrypted on the server, it does matter. Encrypting on the
client is mostly pointless unless the client code is independent of the
server, which requires it to be independently signed.

There is still a cert chain of course, but it's a different one where the
developer's private key doesn't get uploaded to the server.

Signed apps are fundamentally not how the web works. The argument here is that
the web is basically broken for client-side encryption and the app store model
is better for things like secure email or a bitcoin wallet.

But since most app store apps aren't open source and the open source ones
aren't independently audited in practice, it's not clear it's a practical
difference.

------
tnash
I think the OP makes some good points, but I dislike it when people are put
off working on their own crypto. Can crypto be difficult to implement?
Certainly. Is it impossible? No. The OP makes it sound like properly
implementing and authenticating a block cipher is way too difficult for the
average programmer, when it is really rather trivial.

~~~
VLM
For those who can't read sarcasm over the internet, he's kidding. Go ahead,
implement ECB mode, what could go wrong, LOL. I've also got a really nice
"special" RNG for you too; although it's really slow, it is NSA approved.

If you only need security theater, just stick to ROT-13.

As a tiny little sub-area of computational activity, writing crypto is total
sorcerer's apprentice territory.

