
In the not-so-recent past, HN'ers loved to quote tptacek's legendary rant about how in-browser JavaScript crypto is fundamentally broken[0].

What changed? Is that rant finally outdated? Couldn't Mozilla at any time serve a corrupted JS bundle (with or without their knowledge) that would leak the key somewhere, silently replace the encryption with a no-op, etc.?

I ask out of interest, not skepticism. I much prefer an internet where we can trust web apps to do proper crypto than one where we have to depend on some app store to somewhat adequately protect us.

[0] https://www.nccgroup.trust/us/about-us/newsroom-and-events/b...




Some of those points are relevant and some aren't. For logging in to a website, "just use SSL/TLS instead" makes sense, but not for this use case. There are better options nowadays for doing crypto in the browser, but I wouldn't be surprised if they were at least theoretically vulnerable to side-channel attacks from JS running in another tab.

The main thing is that unless you're paying really really close attention to the JS that you're executing, you can't trust this any more than you can trust Mozilla and the security of whatever computer is serving their pages. I wouldn't use this for sending data that you're trying to hide from a nation-state, but it looks like a great option if you want to send a video to your grandma without posting it publicly on the internet or teaching her how to use GPG.


Followup question:

I have Signal running on my Linux computer and on my Android phone. On the Linux computer it doesn't have root access, but it does have access to its own files, so in theory there's nothing to prevent it from making a network request and updating itself. Additionally, I don't ever check Signal's code before installing a new update; I just blindly do it.

On my Android device, I also have auto-update turned on, because my only option is to turn it on for every app or none of them. So there's nothing to prevent Signal from updating itself and changing the crypto. If I were on an iOS device, I wouldn't even have that option -- to the best of my knowledge you cannot turn off app auto-updates on an iPhone, but maybe someone can correct me if I'm wrong. In any case, it doesn't matter that Signal is updated "rarely". An attacker only needs to install one backdoor; they don't need to update it a hundred times.

So for an extremely typical user like me, who has been taught for as long as I can remember that the most secure thing you can do on an OS is install updates as they come in when they come in, doesn't Signal have the exact same problems as Mozilla? If someone compromises Signal's servers, can't they add a side-channel just as easily?

In theory, I could disable auto-updates and only update Signal when I looked at the source code, just like in theory I could examine the JS that I'm executing every time I connect to a site. But in practice, I don't.

When I read tptacek's rant nowadays, the immediate thing I think is, "The web is malleable? Literally every single computing environment and device I own is malleable." It feels like if I were to take tptacek's advice to its logical conclusion, I would just conclude that E2E encryption in general is dead.


Yes and no, depending on your threat scenario.

I would assume Signal to have a proper signing infrastructure in place, so that the keys used to sign new releases are not available to the server hosting/deploying the actual update files (or providing them to Google/Apple for that matter). So simply taking over that server would not be enough, as malicious updates could not be installed.

Assuming Moxie goes over to the dark side, however, you are screwed. There's nothing stopping your Signal app from bundling all your plaintext messages once you've entered your password and sending them off to China, save maybe a firewall you have in place. Google or Apple might stop such an update during their reviews, but I wouldn't bet on it.


Signing infrastructure does seem like a significant improvement over Javascript delivery, but does that also carry over to platforms like Windows?

Again, please correct me if I'm wrong, but Windows doesn't do anything with signing app updates, does it? Come to think of it, I'm not 100% sure my Linux version has this either, since Signal isn't being distributed as part of the official repos.

If Signal is being updated on Windows without validating any kind of signature, could a compromised server even pull off the "send a malicious payload to only one IP address" attack that people talk about with the web?


While Windows does allow for code signing of executable files in general, I doubt Signal is using that system. The official Windows Store would probably handle updates similarly to how Apple and Google do, but Signal doesn't use it either.

You can always implement signing yourself, though, without relying on somebody else's infrastructure. Just include the public key in the app itself and use it to verify that updates are properly signed by your private key before accepting them. I haven't checked, but I assume/hope Signal is doing this with their updated JS packages.
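
A rough sketch of that verify-before-install idea, assuming a Node-based updater and an RSA or EC signing key (the file names and function here are made up, not Signal's actual code):

    const crypto = require('crypto');
    const fs = require('fs');

    // Public key pinned into the app at build time (hypothetical path).
    const PINNED_PUBLIC_KEY = fs.readFileSync('update-signing.pub');

    function isUpdateTrusted(updatePath, signaturePath) {
      const update = fs.readFileSync(updatePath);
      const signature = fs.readFileSync(signaturePath);
      // Refuse the update unless its detached signature verifies against
      // the pinned key. The private half of the keypair lives on a
      // signing machine, never on the delivery server.
      return crypto.verify('sha256', update, PINNED_PUBLIC_KEY, signature);
    }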

If no such signing were in place, however, then the answer to your last question is "yes", though with a caveat: if Signal's update servers are compromised and push out a malicious update, then all bets are off, as the app running on your system has access to all your unencrypted messages. If the compromised server is only one of the messaging/relay servers, however, things are not as bad: those servers don't have access to your keys and thus can't decrypt your messages. They can still forward them somewhere else for later decryption, but thanks to perfect forward secrecy this is currently rather unrewarding.


So the takeaway I'm getting from this and a few other comments is that, in general, running automatic updates for most software is still more secure than not, since a 0-day is more likely than a compromised server.

E2E encryption is still valuable, because assuming that the codebase is delivered/signed separately from its app servers, it decreases the number of available attack points. It's usually easier to secure code delivery than it is to secure your entire backend/database. It's even easier than that to secure a private key that you never put on your delivery servers in the first place.

JS has some additional concerns regarding stuff like Spectre and random number generation, but ignoring those for a sec, E2E encryption is in theory viable and valuable on the web, assuming you've split your backend from your code delivery endpoint and are taking extra steps to secure those specific code delivery servers.

But E2E encryption on the web could be improved a lot if we expanded code signing for browsers. We download code over SSL, but that only ensures no one MITMs the connection to the server. We could, in theory, have some kind of signing system for raw assets that was completely unrelated to the connection to the server -- an "only allow Javascript to download/eval on this domain if it's signed with a key that's not even stored on the delivery server at all" policy. But we don't have anything like that yet.
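
No browser offers such a policy today, but a hand-rolled sketch of the idea with SubtleCrypto might look roughly like this (entirely hypothetical -- and it only moves the problem, since the loader itself would still come from the same server unless it lived in the browser or an extension):

    // Hypothetical: fetch code plus a detached signature, and only eval
    // the code if it verifies against a key held somewhere out-of-band.
    async function loadSignedScript(scriptUrl, sigUrl, publicKey) {
      const [code, sig] = await Promise.all([
        fetch(scriptUrl).then(r => r.arrayBuffer()),
        fetch(sigUrl).then(r => r.arrayBuffer()),
      ]);
      const ok = await crypto.subtle.verify(
        { name: 'ECDSA', hash: 'SHA-256' }, publicKey, sig, code);
      if (!ok) throw new Error('script failed signature check');
      (0, eval)(new TextDecoder().decode(code)); // run only after verifying
    }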

Is that a reasonable interpretation?


> On my Android device, I also have auto-update turned on, because my only option is to turn it on for every app or none of them.

Open the Signal store page, tap the dots in the top right of the screen, and untick Automatic Updates.


Oh crud, thank you!

I didn't know that, and there are a few apps that I definitely want to use this with. Why on earth isn't this part of the general settings?


If there’s a remote code execution vulnerability, normal users will update but you won’t. If you are voluntarily replacing automatic updates with manual processes, be sure to update Signal before using it each time, or a nation-state can tap a zero-day to infect all the experts who know better than to leave auto-update enabled.


You can absolutely disable app and OS auto-updates on iOS.


This is really the point here. But the danger is always that someone who needs strong, nation-state-resistant crypto gets used to this and doesn't realize the implications of using it when trying to keep state-level secrets.


Indeed, you can rest assured that this will be used to share passwords that should not be shared this way. I would be surprised if it hasn't been already.


It's not outdated; it remains fundamentally true. But I'm uncomfortable with people calling it a "legendary rant" because it was dashed off and I never promoted it as any kind of last word on the subject. There are better arguments against browser cryptography than mine.

In particular: you'd hope that WebCrypto would have changed things a bit, but, of course, it doesn't: it leaves all the cryptographic joinery up to content-controlled code. You're probably somewhat less likely to have side-channel flaws if you use it, but in reality timing side-channels are more talked about than seen. Who would bother, when they can just deliver a script that exfiltrates secrets directly?


You have said a bunch of useful stuff in HN comments that people end up pointing to, but in those comment rants you also have a tendency to leave things hanging or allude to things without further explanation (I think for fear of being boring), or to assume people understand the context of a long-running debate.

I think you should consider hoisting more of this stuff out into standalone blog posts that you can flesh out and also update as circumstances warrant. I don't think I'm the only one who has learned a lot from reading you, but often felt myself wishing it had been dumbed down a shade for beginners.

Maybe the best argument for it is that blog posts remain mutable and you can add and expand as necessary, unlike these HN posts that are frozen in amber.


This place has basically ruined me for writing. I used to sort of know how to do it! The idea of writing a top-to-bottom "browser Javascript is evil" post is intimidating to me now. It was intimidating when I wrote the post referred to above! And that one wasn't even good!

I'll work on it.


One idea is to get a volunteer or hired goon to simply collate your HN posts and post them somewhere editable. Then when you read them over, you'll be horrified and the editing instinct will kick in.


I don't think it's you who made it legendary. I think it's the HN commenters who keep linking to it who did that (myself included, since yesterday).

And, well, you may disagree but to me it definitely reads like a proper rant :-)

Please note that I chose the words "legendary rant" with all the love imaginable and I had hoped you'd interpret it as nothing other than a compliment. I much appreciate your contributions to HN and the internet as a whole.


>There are better arguments against browser cryptography than mine.

Mind pointing to or sharing them?


SubtleCrypto is part of the Web Crypto API, a browser spec for performing crypto operations natively. For example, instead of using Math.random() for random number generation, you can use https://developer.mozilla.org/en-US/docs/Web/API/Crypto/getR... in combination with the SubtleCrypto functions to work with keys securely.
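
For example, roughly (inside an async context; the AES-GCM parameters are just illustrative):

    // Cryptographically strong randomness, unlike Math.random().
    const iv = crypto.getRandomValues(new Uint8Array(12));

    // Generate and use a key natively via SubtleCrypto.
    const key = await crypto.subtle.generateKey(
      { name: 'AES-GCM', length: 128 }, true, ['encrypt', 'decrypt']);
    const ciphertext = await crypto.subtle.encrypt(
      { name: 'AES-GCM', iv }, key, new TextEncoder().encode('hello'));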

Your points around a compromised JS bundle are still possible but that has more to do with a company’s deployment/change management setup than JS itself imo


> Your points around a compromised JS bundle are still possible but that has more to do with a company’s deployment/change management setup than JS itself imo

But that's the only point I intend to address here. If Pascal had been the language of the web then my question would have been about Pascal.

Therefore I don't see how SubtleCrypto changes matters much.

In short, if I get it right, the argument would be that in, e.g., a mobile app, all the E2E logic (the core crypto plus the code around it) goes through peer review, then some release management process, then some review by Apple or Google, before it lands in my hands via their app stores' well-secured delivery mechanism. In a web app, a single compromised server will compromise all security instantly. Generally I'm fine with trusting Mozilla's servers, but if I have to trust their servers then what's the point of end-to-end encryption?


> In a web app, a single compromised server will compromise all security instantly.

This is only true if the server has access to the keys to your data. E2EE typically means that it doesn't; only you do.


In a browser, the server serving the JS has an opportunity to access the keys.


This is the case with all E2EE tools. You have to trust that they do their crypto correctly and that they aren't evil. As Firefox Send is open source, you can set up your own server if you don't trust Mozilla -- but then again, if you don't trust Mozilla you might want to eyeball their code carefully first...


Your description is very simplistic, but yes, you have to trust the code that's delivered to you. For example, no Android/iOS user would check every single update to the E2EE apps they install for backdoors. However, on the web, there's an opportunity for a backdoor in every single request, and the server can ship different code to different users. In my opinion, using web cryptography is still worth it, but it's definitely more risky than native apps.


SubtleCrypto has an API to generate a keypair that you can't extract or access from the JS side. You can only use it to encrypt/decrypt buffers, not access the key itself.
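
Roughly like so (inside an async context; a symmetric key is shown for brevity, but the same extractable flag applies to generated keypairs):

    const iv = crypto.getRandomValues(new Uint8Array(12));
    const data = new TextEncoder().encode('secret');

    // extractable = false: the raw key bytes are never exposed to JS.
    const key = await crypto.subtle.generateKey(
      { name: 'AES-GCM', length: 256 }, false, ['encrypt', 'decrypt']);

    // The key can still be used...
    const ct = await crypto.subtle.encrypt({ name: 'AES-GCM', iv }, key, data);

    // ...but any attempt to read it back out fails.
    await crypto.subtle.exportKey('raw', key); // throws InvalidAccessError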


Sure, but having access to encryption/decryption/key derivation is pretty much equivalent to having the key in most circumstances. Plus, it's the JS that generates the key and sets the "extractable" flag.


And how does that work with Firefox Send? Isn't the key somehow in the payload or the URL?


Without knowing much about Firefox Send, it does seem that the key is embedded in the link you give to your friends. In that case, E2E means that the key is not stored on the server. To guarantee that, the link is probably in two parts: one that identifies the file on the server and one that is the key. The key part of the link is probably generated on your machine and thus never sent to the server (to prevent it from ending up in any logs or whatnot). So if Mozilla's servers are compromised, the attacker still would not be able to decrypt your files. Of course, if the server is compromised, the attacker could serve up malicious JS for future uploads.
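
A minimal sketch of that two-part-link idea (the domain and ids are invented, not Send's actual scheme): the fragment after "#" is never included in HTTP requests, so the server never sees the key part.

    // Sender side: generate the key locally; it goes in the URL fragment.
    const rawKey = crypto.getRandomValues(new Uint8Array(16));
    const keyB64 = btoa(String.fromCharCode(...rawKey));
    const fileId = 'abc123'; // whatever id the server assigned the upload
    const link = `https://send.example/download/${fileId}#${keyB64}`;

    // Recipient side: the page reads the key back out client-side.
    const receivedKeyB64 = location.hash.slice(1);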


If it works, it prevents mass surveillance and makes insider attacks much more difficult.


Didn't realize it had full support in every browser, even IE: https://developer.mozilla.org/en-US/docs/Web/API/Crypto/getR...


The problem with that table is that it only lists the entry methods, which are supported in all browsers. The actual work, however, is hidden behind parameters, not all of which are supported by all browsers, and some only in weird combinations. One example is that Edge does not support PBKDF2 in any form, which makes much of its other support a bit awkward to use.

Here's a site where you can test your browser's compatibility with many combinations: https://diafygi.github.io/webcrypto-examples/
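
A runtime probe along these lines is one way to find out what a given browser actually supports, rather than trusting the entry-point table alone:

    // Returns true only if the browser can really run PBKDF2,
    // not just expose the importKey/deriveBits entry points.
    async function supportsPbkdf2() {
      try {
        const base = await crypto.subtle.importKey(
          'raw', new TextEncoder().encode('probe'),
          'PBKDF2', false, ['deriveBits']);
        await crypto.subtle.deriveBits(
          { name: 'PBKDF2', hash: 'SHA-256',
            salt: new Uint8Array(16), iterations: 1000 },
          base, 256);
        return true;
      } catch (e) {
        return false;
      }
    }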


I think it uses a cryptofill shim for browsers that don't support all of the crypto API.


The SubtleCrypto portion of the API is slightly less supported in that it appears to have spotty/non-compliant IE and Edge coverage.[1]

1: https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypt...


That article primarily comes down to this:

> WHY CAN'T I USE TLS/SSL TO DELIVER THE JAVASCRIPT CRYPTO CODE? You can. It's harder than it sounds, but you can safely transmit Javascript crypto to a browser using SSL. The problem is, having established a secure channel with SSL, you no longer need Javascript cryptography; you have "real" cryptography.

In our case we aren't doing crypto inception, where the cryptography is meant to secure itself. The crypto code is served securely (over SSL) and then used to solve the separate, unrelated problem of encrypting random files.


That question seems closely tied to

> WHAT'S THE "CHICKEN-EGG PROBLEM" WITH DELIVERING JAVASCRIPT CRYPTOGRAPHY? If you don't trust the network to deliver a password, or, worse, don't trust the server not to keep user secrets, you can't trust them to deliver security code.

I haven't looked at the details of how Firefox Send works, but if you can download and decrypt the file with nothing more than an https:// URL, it seems like you'd have to trust the server, either to handle the cleartext or to provide trustworthy code to handle the cleartext.

I suppose an alternative would be to generate a data: URL, but if it has to include all the crypto code, I wouldn't expect it to be nice and compact.


> I suppose an alternative would be to generate a data: URL, but if it has to include all the crypto code, I wouldn't expect it to be nice and compact.

Sounds like a challenge for the code golfers.


Fundamentally the situation has not changed much. You redownload the code every time, and servers could deliver tailored, compromised versions if ordered to by some TLA. That means audits have limited value, since they can't attest that what they have seen is what anyone else actually gets.

Compare with native tools, which you download only once, whose signatures you can check, and which strive for reproducible builds so that multiple parties can verify them independently.


There was also a time just a few years ago when evangelists claimed JS/CS/etc. were just as fast as native code (some said faster) and blasted you for suggesting otherwise, even when it was clear as daylight that this was blatantly false. That mantra suddenly faded away once native compilers for those languages gained popularity. I guess reality hits you after some time.

Now I see a similar issue with security experts preaching that merely possessing a single piece of software with a single thing they classify as a "vulnerability" means you will be murdered within the next 24 hours. It seems they'll happily DoS your computer, get you fired from your job, take your second newborn, and blow up your computer in your face if that's what it takes to make you finally feel real danger. Not sure why it takes people so long to see that reality isn't black-and-white, but better late (hopefully) than never.


There are many use cases where compromise through code interdiction after a warrant is a perfectly acceptable risk. And considering what it replaces may further increase the weight of the privacy gain. Absolutism is definitely not the way to go, and looking at the state of the tech community (e.g. npm, apt, pip, pacman: check that sha256 sum), we left design-it-right-first behind a long time ago. A valid argument, though I wouldn't defend it to the death, is that we need to work slowly back toward more secure behaviors rather than chasing absolutely secure technologies. I think Firefox Send is such a step back from Dropbox for some.


SSL isn't the only crypto you'd ever want to do, though. What if you want to encrypt data so that it stays encrypted all the way through the layers of the application to the database? That's a valid use case, in tandem with SSL. Also, I have to mention cryptocurrencies.


As long as there is a possibility, I say yes - not "if" but "when."

Humans are always the weakest link on the internet, and someday, sometime, bad code will (unknowingly) be pushed and something will happen to someone.


Couldn't they do the same if the crypto code was on the server?


Seems like Send would have to be built-in browser functionality, or maybe a plugin.


Not relevant to me as all of my sites are entirely secured with SSL.




