Hacker News

Some of those points are relevant and some aren't. For logging in to a website, "just use SSL/TLS instead" makes sense, but not for this use case. There are better options nowadays for doing crypto in the browser, but I wouldn't be surprised if they were at least theoretically vulnerable to side-channel attacks from JS running in another tab.

The main thing is that unless you're paying really really close attention to the JS that you're executing, you can't trust this any more than you can trust Mozilla and the security of whatever computer is serving their pages. I wouldn't use this for sending data that you're trying to hide from a nation-state, but it looks like a great option if you want to send a video to your grandma without posting it publicly on the internet or teaching her how to use GPG.

Followup question:

I have Signal running on my Linux computer and on my Android phone. On the Linux computer it doesn't have root access, but it does have access to its own files, so in theory there's nothing to prevent it from making a network request and updating itself. Additionally, I don't ever check Signal before installing a new update, I just blindly do it.

On my Android device, I also have auto-update turned on, because my only option is to turn it on for every app or none of them. So there's nothing to prevent Signal from updating itself and changing the crypto. If I were on an iOS device, I wouldn't even have that option -- to the best of my knowledge you cannot turn off app auto-updates on an iPhone, but maybe someone can correct me if I'm wrong. In any case, it doesn't matter that Signal is updated "rarely". An attacker only needs to install one backdoor; they don't need to update it a hundred times.

So for an extremely typical user like me, who has been taught for as long as I can remember that the most secure thing you can do on an OS is install updates as they come in when they come in, doesn't Signal have the exact same problems as Mozilla? If someone compromises Signal's servers, can't they add a side-channel just as easily?

In theory, I could disable auto-updates and only update Signal when I looked at the source code, just like in theory I could examine the JS that I'm executing every time I connect to a site. But in practice, I don't.

When I read tptacek's rant nowadays, the first thing I think is, "The web is malleable? Literally every single computing environment and device I own is malleable." It feels like if I took tptacek's advice to its logical conclusion, I would have to conclude that E2E encryption in general is dead.

> On my Android device, I also have auto-update turned on, because my only option is to turn it on for every app or none of them.

Open the Signal store page and click the dots in the top right of the screen and untick Automatic Updates.

Oh crud, thank you!

I didn't know that, and there are a few apps that I definitely want to use this with. Why on earth isn't this part of the general settings?

If there’s a remote code execution vulnerability, normal users will update but you won’t. If you are voluntarily replacing automatic updates with manual processes, be sure to update Signal before using it each time, or a nation-state can tap a zero-day to infect all the experts who know better than to leave auto-update enabled.

Yes and no, depending on your threat scenario.

I would assume Signal to have a proper signing infrastructure in place, so that the keys used to sign new releases are not available to the server hosting/deploying the actual update files (or providing them to Google/Apple for that matter). So simply taking over that server would not be enough, as malicious updates could not be installed.

Assuming Moxie goes over to the dark side, however, you are screwed. There's nothing stopping your Signal app from bundling all your plaintext messages once you've entered your password and sending them off to China, save maybe a firewall you have in place. Google or Apple might stop such an update during their reviews, but I wouldn't bet on it.

Signing infrastructure does seem like a significant improvement over Javascript delivery, but does that also carry over to platforms like Windows?

Again, please correct me if I'm wrong, but Windows doesn't do anything to verify signatures on app updates, does it? Come to think of it, I'm not 100% sure my Linux install has this either, since Signal isn't distributed as part of the official repos.

If Signal is being updated on Windows without validating any kind of signature, could a compromised server even pull off the "send a malicious payload to only one IP address" attack that people talk about with the web?

While Windows does allow for code signing of executable files in general, I doubt Signal is using that system. The official Windows Store would presumably handle updates similarly to how Apple and Google do, but Signal doesn't use it either.

You can always implement signing yourself, though, without relying on somebody else's infrastructure. Just include the public key in the app itself and use it to verify your updates are properly signed by your private key before accepting them. I haven't checked but assume/hope Signal is doing this with their updated JS packages.

If no such signing is in place, however, then the answer to your last question is "yes", though with a caveat: if Signal's servers are compromised and push out a malicious update, then all bets are off, as the app running on your system has access to all your unencrypted messages. If the compromised server is only one of the messaging/relay servers, however, things are not as bad: those servers don't have access to your keys and thus can't decrypt your messages. They can still forward them somewhere else for later decryption, but thanks to perfect forward secrecy this is currently rather unrewarding.

So the takeaway I'm getting from this and a few other comments is that, in general, running automatic updates for most software is still more secure than not, since a 0-day is more likely than a compromised server.

E2E encryption is still valuable, because assuming that the codebase is delivered/signed separately from its app servers, it decreases the number of available attack points. It's usually easier to secure code delivery than it is to secure your entire backend/database. It's even easier than that to secure a private key that you never put on your delivery servers in the first place.

JS has some additional concerns regarding stuff like Spectre and random number generation, but ignoring those for a sec, E2E encryption is in theory viable and valuable on the web, assuming you've split your backend from your code delivery endpoint and are taking extra steps to secure those specific code delivery servers.

But E2E encryption on the web could be improved a lot if we expanded code signing for browsers. We download code over SSL, but that just ensures no one MITMs the connection to the server. We could, in theory, have a signing system for raw assets that is completely unrelated to the connection to the server -- an "only allow JavaScript to download/eval on this domain if it's signed with a key that isn't stored on the delivery server at all" policy. But we don't have anything like that yet.

Is that a reasonable interpretation?

You can absolutely disable app and OS auto-updates on iOS.

This is really the point here. But the danger is always that someone who needs strong nation-state-resistant crypto gets used to this and doesn't realize the implications of using it when trying to keep state-level secrets.

Indeed, you can rest assured that this will be used to share passwords that should not be shared this way. I would be surprised if it hasn't been already.

