Show HN: A note taking application that encrypts in the browser (github.com/moyaproject)
58 points by billowycoat on March 8, 2015 | 43 comments



Very cool - the issue with believing that encrypting on the client is safer than relying on a secure transport and a secure server is that the code on the client doing that encryption came from the server. That said, it may still be an interesting layer of security to add to your application. After all, security is more about layers than absolutes.


This is the usual argument in HN when client-side encryption comes up, and I have a question about it.

Doesn't virtually all encryption on user computers come from servers? I mean, Chrome updates itself from a server, and I first downloaded it from the same server. Google (or anyone breaking into Google) could maliciously break the SSL code in my browser and I'd never even know it had been updated.

Likewise, I downloaded dropbox and putty and ssh.exe from servers, all apps I installed on my phone came from a server and were put there by people I don't know at all.

Why is in-browser encryption considered so different that even smart people like tptacek get all worked up about it? Is it that browser apps get "reinstalled" on every use, and not "only" when an autoupdater or user detects a new version? If so, can I use appcache to achieve the same and enjoy the exact same security as I'd get in an installed app(lication)?

In short, what am I missing?

(note to the angry security people: I'm not dissing you, I really don't know. I'd appreciate non-snarky replies)


> Why is in-browser encryption considered so different that even smart people like tptacek get all worked up about it?

There are two issues that I can think of.

The first is philosophical, and not technological: in all these examples (e.g. this post), doing encryption on the client side gets you nothing. Furthermore, although it is in fact exactly as secure as not having client-side encryption, it misleads users into thinking that the backend now can't read their data, etc.

Assume the server operators are 1) competent and 2) trustworthy. Then let them use the standard, time-tested technology of SSL to send data in the clear, and trust them to encrypt it on their end.

If you can't assume (1) and (2) above, remind me why I'm trusting the javascript code they deliver instead?

The point is, with web apps you have to either trust the provider or not. If you do, just use SSL. If you don't, then nothing they can provide via javascript should convince you otherwise. And them providing client side encryption so that you "don't have to trust them" is infuriatingly wrong.

The second point is technological. Let's say the developers are competent, and you understand that you have to trust them. Why then should you prefer that they write the encryption stuff on the backend rather than the frontend? This one I'm not so clear on and would like to hear more. It makes sense that the backend is a more controlled environment, but beyond that I don't know.


> nothing they can provide via javascript should convince you otherwise.

How about javascript that is:

(a) open-source

(b) third-party audited, with a third-party-published hash of the versioned javascript

(c) verified against the hash at runtime by a browser plugin that consults a hash directory of audited versions?
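
For concreteness, here's a rough sketch of the check such a plugin could perform, using the Web Crypto API (the function name and the idea of an audited-hash directory are hypothetical, not something that exists today):

    // Hypothetical plugin-side check: hash the script bytes and compare them
    // against an audited, third-party-published digest.
    async function scriptMatchesAudit(scriptUrl, expectedSha256Hex) {
      const bytes = await (await fetch(scriptUrl)).arrayBuffer();
      const digest = await crypto.subtle.digest('SHA-256', bytes);
      const hex = Array.from(new Uint8Array(digest))
        .map(b => ('0' + b.toString(16)).slice(-2))
        .join('');
      return hex === expectedSha256Hex;
    }

The hashing itself is trivial; the hard part is making sure the bytes you hash are exactly the bytes the page executes, and that nothing else the page loads can override them.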


That's great, if it's running in an extension that is only downloaded once and firewalled off from the DOM. Otherwise, every single HTTP request your browser makes can potentially override it.


What design or implementation changes would you recommend for a future browser or HTTP spec, to enable 3rd-party validation of scripts and notification to the user when new, unvalidated scripts are requested?


That (c) honestly sounds to me like it ought to be a web standard, signed authenticated javascript.


(c) would be great. Potentially it works in a very similar way to the CA system, but instead of a CA you have browser vendors who choose to include your JS file as verified... definitely a lot of details in there that could go seriously wrong if the wrong people decided to get involved in defining a process for validating those hashes and pushing them out to the vendors...


In theory, users could choose their own verification authority.


Thanks for your detailed answer. You make a lot of good points, but I'm still not entirely convinced:

> Assume the server operators are 1) competent and 2) trustworthy. Then let them use the standard, time-tested technology of SSL to send data in the clear, and trust them to encrypt it on their end.

What if the developers are entirely trustworthy and almost competent? And what if a JS crypto library surfaced that's also pretty time-tested and considered as secure as, say, OpenSSL?

The almost-competent developers could be relying on an entirely standard OSS browser-side encryption library. If they then make a minor fuckup on the backend (say, forget to update OpenSSL in Heartbleed times, or miss that newest Rails vuln Egor Homakov found, or overlook an intern-created SQL injection in a code review, etc.), then the harm is smaller because there's simply no plaintext ever in memory on the server.

Like the GP said, by doing client-side security, you really do add another layer, right? This of course assumes that the client-side JS is decent, but there's no need for the coders of the app to homebrew that just like they don't tend to homebrew their HTTP server or their SSL layer.

Sure, some of the above-mentioned vulnerabilities could enable a hacker to also inject false encryption JS code but that's still way more complex than just injecting some server-side code that dumps plaintext to pastebin.

I'm interested because I consider myself to be such an "almost competent" developer, at least when it comes to security. If I can add another layer of security by using decent OSS and not becoming a crypto expert then I'm very interested.


Right, whether you use a native or a web app, you have to trust two things: The developer of the app, and the SSL infrastructure (assuming of course you use https).

A rogue developer or a compromised CA can harm you in either case.

The problem with in-browser encryption is when it replaces https. In that case, you have no assurance that the key hasn't been seen/touched by a man-in-the-middle attacker.

What in-browser encryption on top of https does is protect you in the case where the (honest!) app provider has a data breach. They never had your plaintext to begin with.
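
As a minimal sketch (not this project's actual code), that extra layer can be as small as deriving a key from the user's passphrase with the Web Crypto API and encrypting before the upload, so the server only ever stores ciphertext:

    // Sketch only: derive a key from the passphrase, encrypt locally,
    // send just the ciphertext (plus salt and IV) over https.
    async function encryptNote(passphrase, noteText) {
      const enc = new TextEncoder();
      const salt = crypto.getRandomValues(new Uint8Array(16));
      const iv = crypto.getRandomValues(new Uint8Array(12));
      const baseKey = await crypto.subtle.importKey(
        'raw', enc.encode(passphrase), 'PBKDF2', false, ['deriveKey']);
      const key = await crypto.subtle.deriveKey(
        { name: 'PBKDF2', salt: salt, iterations: 100000, hash: 'SHA-256' },
        baseKey, { name: 'AES-GCM', length: 256 }, false, ['encrypt']);
      const ciphertext = await crypto.subtle.encrypt(
        { name: 'AES-GCM', iv: iv }, key, enc.encode(noteText));
      return { salt: salt, iv: iv, ciphertext: new Uint8Array(ciphertext) };
    }

The passphrase never leaves the browser, so a breach of the provider's database yields only ciphertext.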


Whether or not this is acceptable depends on your threat model. If you believe your adversary might compromise or coerce the service operator, then you cannot trust in-browser encryption even if it is served over https - the code sent to you could be modified to be malicious, and you have no way to prevent or even detect that this is happening. See the Tor Freedom Hosting [0] incident for an example of how LEA already do this.

So, the inability to guarantee integrity of a web application remains a problem. TLS helps, but falls short if your adversary can MITM TLS or compromise/coerce the service operator. Web applications unfortunately make this a very convenient attack vector, since their code gets reloaded from the server so frequently and remote code execution (RCE) is trivial to achieve on the web platform (XSS, browsers are full of exploitable bugs).

[0] http://www.wired.com/2013/09/freedom-hosting-fbi/


GP's questions are (respectfully) being skirted around. The same reasoning applies to compiled, client-side code. Recent events show that open source code is as vulnerable as closed source. App stores may mitigate things somewhat, but not completely. It's probably easier to verify client-side encryption in a browser than it is to audit a thick client app, no?


> All apps I installed on my phone came from a server and were put there by people I don't know at all.

The apps on your phone can't be updated without notice to you. It's significantly more difficult for the app developer to maliciously update only your copy, and not every copy. Updating everyone's copy carries much higher risk.

> Is it that browser apps get "reinstalled" on every use, and not "only" when an autoupdater or user detects a new version?

Yes. You might not be able to reverse engineer the Dropbox client, but you can download it from several places and see if it's the same.

If I'm a developer who releases some software version 1.2.3 with a trojan in it, I can't easily take it back, making the risk of detection significant. You have stuff like the NIST NSRL out there compiling databases of hashes of all the software they can get their hands on. You have security researchers reverse engineering patches looking for silently fixed vulnerabilities.

With a web page it's really easy to serve the trojan once to a specific user. The risk is very low. Hushmail did it, for example.

The fact that operating system vendors have the power to trojan us at will is a huge problem. And thanks to online activation many users are easily identified for targeting. I don't think many people want to talk about it because it's so difficult to fix.


The big issue, as you note, is that you download normal software once, while with a website you download it every time you visit (modulo caching, of course). With free software, you could download it, audit it, and then feel secure using it for a decade or more; with in-browser crypto, you're highly unlikely to load the site, audit the code, and repeat that every single time you use it.

Moreover, this constant redownloading enables user targeting: probably no-one will notice if Google serves a slightly different piece of JavaScript to one user.

That's one of the beauties of signed free software: it's public, and many people can collaborate on ensuring that it's secure.

I don't think that the appcache applies to this case because it's under the web server's partial control.


I came across this issue when I was implementing an ssh client in the browser (https://www.minaterm.com). It's absolutely correct of course, but interesting that it's still an issue in 2015.

There should be a way of having the browser (outside of the JS) verify the hash/signature of a page against an external repository which would verify that this page had been independently audited.

It's interesting that it's relatively easy to do this with native client software (hash/signature, manually check it), but less so with browser based applications.


Yeah but "in the browser" is not "on the client". Do we have browser internal encryption engines yet? Encrypting in the JS has the problem that someone trying to read your nodes could MITM you other JS files, that allow him to read your clear text notes.


The point is that the web is very geared towards "Seamlessly get the latest content from the server without bothering the user".

That's not necessarily what we want in the land of crypto. We want something more like "Offer to get the latest code from the server, but ask the user for permission first, because maybe the server has been compromised."


This is true. And you can't be expected to verify the code every time you use it.

It's more of a self-hosting solution I think.


If you self-host then you trust the server anyway. Can it be useful for forward secrecy if the server gets compromised later?


Having the client page hosted at something like IPFS would solve this problem. The content of the webpage (and all its javascript references) would be the same whenever you had the same address, so you could be sure the code was safe.


You may have seen this: http://matasano.com/articles/javascript-cryptography/

The brief problem statement is:

Either you're on SSL, in which case, you have encryption from the client to the server.

Or you don't have SSL, and a MITM attack can substitute your javascript with anything, rendering the scheme unsafe.


Or you do both. If you're using SSL you still have to trust the server, or anyone who gets access to it.

Encrypting in the browser at least means the server doesn't intentionally have access to the plaintext. That means if the server is stolen/seized the plaintext can't be accessed.

However you are right of course that it doesn't protect you against a malicious agent who has access to the server over a period of time.

It seems like it should be theoretically possible to do that though, but it would require browser capabilities that we don't have at the moment.


> However you are right of course that it doesn't protect you against a malicious agent who has access to the server over a period of time.

Or against starting with a malicious server.


I love a lot of the outreach that Matasano does, but I strongly disagree with that article. Most of the points they make aren't fundamental problems of websites - they're just issues for poorly implemented websites. Some of it is out of date / plain wrong - for example, we've had a CSPRNG (window.crypto.getRandomValues) on the web since Chrome 11.
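
For reference, pulling secure random bytes from that API is a one-liner:

    // Fill a typed array with bytes from the browser's CSPRNG.
    var randomBytes = new Uint8Array(32);
    window.crypto.getRandomValues(randomBytes);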

Most of the rest of the complaints you could also reasonably level against installed apps with update mechanisms. The article compares web apps (and their update mechanism) with desktop apps (without their update mechanism). Then it points out flaws in update mechanisms in general (e.g. they can send you malicious code), and says that that's why the web is flawed. Yeah, nice try.

The fundamental question is: how do you trust the code that I give you? No matter what platform you're on, at some level, you need to trust me. Let's say I'm writing a 'secure' todo list app. You have to trust that I'm not going to forward your todo list entries to any hooded figures, and that I'm not going to change your shopping list to quietly add entries from my sponsors. Also, both on the web and locally, apps can open mostly-arbitrary network connections and send any data anywhere they like.

It's as simple as that. On the web, I send you code, you run my code, my code does something useful and might betray you. In native apps, I send you code (maybe via an app store or something). You run my code. My code does something useful but it might betray you.

As far as I can tell there's only two fundamental weaknesses of web apps:

1. The JS isn't signed
2. The JS gets sent on every page load

The combination of which makes it much more convenient to do spear-phishing type attacks. But that said, any threat that looks like "but on the web you might send malicious code to user X" is also true of other app update mechanisms. Even on the iOS App Store, nothing is stopping me from writing code which says `if (hash(username+xyz) == abc123) { send data to NSA(); }`. I can't think of any binary-downloading systems (app stores, apt-get, etc.) which would discover that code.

And remember - desktop app code is potentially much more dangerous. Desktop apps can take photos with your webcam, record audio, record keystrokes and access every file in your home directory.

It's definitely true that most web apps are poorly implemented - they dynamically load 3rd-party JS and they don't use SSL / HSTS. It's also embarrassing how many desktop apps have simple buffer overflow exploits. But the solution isn't to go back to desktop apps - the solution is to push for better best practices on the web.

In my opinion, the biggest security problem with the web is that most web apps store all your data in plaintext on someone else's computers. This is a problem that we need to start addressing systematically via projects like CryptDB. I.e., we need more serious security work done on the web. Not less.


> But that said, any threat that looks like "but on the web you might send malicious code to user X" is also true of other app update mechanisms.

Traditionally, app updates don't happen without the user's knowledge. I'm not entirely certain what the case is with the app stores. Point is, you can easily choose to stick to a specific version with, at least, most desktop applications. You only have to verify that one version.


Practically speaking, what verification are you talking about? Who do you imagine is verifying the security of your application binaries?

I'm confused by the argument that slower app update mechanisms lead to a more secure platform. The reverse is clearly true sometimes - when a vulnerability is discovered, updating an app quickly is important. When would low latency deploys lead to less secure code?


> Practically speaking, what verification are you talking about? Who do you imagine is verifying the security of your application binaries?

I can build it from source where any number of people may have verified it, or otherwise, I can trust the creator/packager/distributor now but not have to continue trusting them every time I access the app.

> When would low latency deploys lead to less secure code?

When any number of entities are either taken over by, decide to collaborate with, or become, an attacker at some point after you first download the application.


You forget one case: Here, the server will never know the content of the encrypted data.


Unless the attacker has access to the server for a period of time, in which case they just replace the JS with their own code.

Seems like there should be a way to sign the JS offline (away from the server) and prevent that, but I don't think there's any browser mechanism for that. I think that would be neat though.


Unless it is the one delivering you faulty javascript. It does help if someone gets only read access to the server though (hacked user account, stolen backup, confiscated server...)


But can't you perform a MITM on SSL anyway?


It's possible, but usually prohibitively difficult. (Superfish was scandalous because it made MITM attacks easier on affected computers.)

TLS and SSL use X.509 certificates[1] to identify who you're talking with. They're cryptographically signed by a Certificate Authority (CA), which must be in the list of CAs stored on your computer.

A MITM attack requires something rather difficult, like breaking the cryptography algorithm, access to the CA's private key, access to install a new CA certificate on your computer, or an insecure CA already on your computer (like Superfish).

Edit: As jacques_chester reminded me, it's not difficult for the owner of a computer to set up a MITM server, but I don't call that an "attack".

1: http://en.wikipedia.org/wiki/X.509


I'm working on a client project right now. Their corporate firewall MITMs all SSL traffic. Their standard OS images all have SSL and root CA certificates installed to make this happen invisibly.

The client staff we're working with had no idea this was happening.


Yeah, not many people know about the CA store, and even if you do, and check it, there are so many pre-installed certificates that it's easy to miss a new one.

No one at the company had tried to use the network with a computer they bought for themselves?


Their VPN is very restrictive (no Google, several CDNs are blocked so half the internet is broken) and hard to configure. So they just use the supplied machines.

They learnt it from us because we have our own machines.


Turning the client-side code into a browser extension would lessen the chances of a MITM attacker tampering with the code.


In case people are interested in what's under the hood: I just dug in a little bit, and it looks like a library is used that automatically generates an IV for each encryption, and automatically uses your passphrase, passed through EvpKDF, as the key.

SHA3, Rabbit, and whatever EvpKDF does, I don't actually have time to look at that.
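
The EvpKDF/Rabbit/SHA3 combination sounds like CryptoJS (just a guess, I haven't verified). If so, its passphrase mode works roughly like this:

    // CryptoJS passphrase mode (assuming that's the library in use): a random
    // salt is generated per call, the key and IV are derived from the passphrase
    // via EvpKDF (OpenSSL's EVP_BytesToKey scheme), and the salt is embedded in
    // the output string.
    var ct = CryptoJS.Rabbit.encrypt("my secret note", "my passphrase").toString();
    var pt = CryptoJS.Rabbit.decrypt(ct, "my passphrase").toString(CryptoJS.enc.Utf8);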


On binbox.io, we use a similar library called the Stanford JavaScript Crypto Library (SJCL). It's fast enough that small files encrypt and decrypt within reasonable times, but it scales poorly to large files.
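
Its high-level API is about as simple as it gets - PBKDF2 for the key and AES in CCM mode by default, with the salt and IV packed into the returned JSON string:

    // SJCL convenience API: encrypt returns a JSON string carrying the
    // ciphertext plus the salt, IV and parameters needed to decrypt it.
    var ct = sjcl.encrypt("correct horse battery staple", "small file contents");
    var pt = sjcl.decrypt("correct horse battery staple", ct);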


Ask me anything about the implementation. I'm no expert on encryption / security so there could be holes.


Could be interesting to allow multiple pre-existing wiki/CMS systems to serve as backends.

(Potentially, then, you only bring the overlay JS from a trusted source, and your encrypted notes can live in many places, as visible-but-uninterpretable 'noise' in other systems.)


Cool! I did something similar with my side project. Users could encrypt/decrypt text in an editor, and would have to provide a password to decrypt it on the display page.

Yes, it's not 100% secure - you have to trust the JS, which with enough targeted effort could be swapped out for something malicious. But this does mean your data at rest is encrypted, which can be very beneficial.

A bulletproof vest doesn't stop all bullets, nor does it protect everywhere. But people still use them all the time in addition to other solutions.

Still, with all that, if you end up with a keylogger on your machine it means nothing. Air-gapping helps a lot, but really nothing is totally foolproof - one can only make it exponentially more difficult to bypass.


"This note is about a fizzy buzz that went away to become a knight. No one shall now what the rabbit ate tonight."



